include() works with a relative file path but not with a full URL

I have this code to capture page visits:

$host = $_SERVER['HTTP_HOST'];
$page = isset($_SERVER['PHP_SELF']) ? $_SERVER['PHP_SELF'] : '';
date_default_timezone_set('America/Chicago'); //set the script's time zone to America/Chicago (UTC-6, or UTC-5 during DST)
$date = date("Y/m/d");
$time = date("H:i:s");

On every page I have this:

<?php
	include("https://www.domain.com/r/pagevisit.php");
?>

This results in my traffic report echoing out the page visited as /r/pagevisit.php instead of the real page. The code works just fine on another domain I manage, but on that site, the code on any given page looks like this:

<?php
	include("../reporting/pagevisit.php");
?>

I have allow_url_include turned On in php.ini. Is what I’m doing with the full URL wrong? Does that ever work? I saw on Stack Overflow that one expert says this is not to be done, ever.

thanks!

Sending the content of PHP files over the network can be a very high security risk, so this all sounds like a very bad idea.

In other words: DON’T

(And what is it that you’re trying to accomplish?)


I’m not sending any content over the network, am I, Frank? I’m capturing visitor data and inserting it into a MySQL database. Regarding your “security risk” comment, is that an issue? If a hacker were to snoop on that, the only thing they would see is a visitor’s ISP’s IP address. What could they possibly do with that? Or why would they care?

Frank,

are you talking about the username and password credentials that are in the PHP file itself being exposed? I’m a little in the dark as to what you mean by “security risk”.

Okay let me take another approach. If you do this:

include("https://www.domain.com/r/pagevisit.php");

Then you are sending your raw PHP code over the network. In the best scenario it stays inside the building (an intranet), but in the worst you will send it across the world. Your PHP code can contain very sensitive data.

The difference between URLs and file paths:

Here you are using a file path (you are reading from your hard disk):

include("../reporting/pagevisit.php");

And that is exactly how you should use it. You go one (or more) directories up the tree so that you retrieve your PHP files (which are not public pages, but libraries or configuration files) from outside the DocumentRoot, where nobody has access from another device.

Anyway, with PHP’s include statement always use a file path, and set that configuration directive allow_url_include to Off.
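For pages scattered across many subdirectories, counting “../” segments gets error-prone. A common alternative is to build an absolute filesystem path from $_SERVER['DOCUMENT_ROOT']. This is only a sketch: it assumes pagevisit.php stays in the /r directory shown in the original URL, and the helper name is made up for illustration.

```php
<?php
// Sketch: build an absolute filesystem path to the tracker so any page,
// at any directory depth, can include the same file. DOCUMENT_ROOT is
// populated by the web server for every request; the /r/ location is
// an assumption taken from the URL in the question.
function tracker_path(string $docRoot): string
{
    return rtrim($docRoot, '/') . '/r/pagevisit.php';
}

// On any page, instead of a URL or a chain of "../" segments:
// include tracker_path($_SERVER['DOCUMENT_ROOT']);
```

Because the include now goes through the filesystem, the included script shares the including page’s superglobals, so $_SERVER['PHP_SELF'] reports the real page instead of /r/pagevisit.php.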

When you use a URL in an include/require statement, it causes an HTTP request to that URL. When you do this for a file on your own web server, your web server is making an HTTP request to itself, consuming two additional HTTP connections and causing an additional web server process. This probably takes 100+ times longer than including the file through the file system. The response you get back is whatever the code at the requested URL outputs; you do NOT get the raw PHP code in this response. (Cases where hackers take advantage of web sites that include whatever is specified via a $_GET parameter, with allow_url_include On, intentionally output raw PHP code in the response so that it will get executed on the requesting server.)

OK, you guys,

I understand what you’re talking about. Thanks, I appreciate it. I will change the URLs to file paths and set allow_url_include back to Off. But can I ask you guys… this domain has literally 30 subdirectories under the root directory, and there is nothing in each of them except one index.php file, which is where all the content is displayed. Do you think I should stop doing that and store all the content in a MySQL database instead?

Oh, and Frank and phdr,

is there any better way to go about capturing visitor data than what I’m doing? If I remember right, I saw this method while researching online how to capture visitor data. Is there an easier method of doing this, if any?

thanks guys!

Jumping in with my two-cents…

Normally, there are two types of visitors to any site. The first is “users” who must log in. When they log in, you capture their IP or any other info, like when they logged in and logged out. This is straightforward and is handled inside the login script. The other is just a visitor who looks at your site without logging in as a member. Normally, you do not keep track of those, since there could be millions of them. Also, Google and other search engines keep checking your site for updates unless you block them.
Some programmers like to keep the IP addresses of visitors and see how often they visit. That would be done in your config file at the top of every page. If you do that, you can see how many times a day a certain IP accesses your site.
In both of these cases, you can keep records of their accesses to your pages by simply adding tracking to your config file at the top of each page. But this gets very tedious to maintain and monitor. Most site owners do not care how many times a visitor looks at the site. But security-wise, sometimes you do this to block out hackers or certain countries. I had to do this once on a blog, as China and Russia were both hitting it every 30 minutes and often inserting fake posts. I used the info to block the offending IPs. Worked well.
I guess it really depends on what visitor data you want to keep track of. Just my humble opinion…
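The per-page tracking and IP blocking described above can be sketched as a small helper pulled in at the top of each page. This is illustrative only: the function names are made up, the blocklist array stands in for whatever storage (e.g. a MySQL table) you would actually use, and REQUEST_URI is used because it reports the path the visitor actually requested.

```php
<?php
// Illustrative sketch of per-page visit capture plus an IP blocklist.
// collect_visit() reads the same kind of $_SERVER values as the
// original snippet, with safe fallbacks for missing keys.
date_default_timezone_set('America/Chicago');

function collect_visit(array $server): array
{
    return [
        'host' => $server['HTTP_HOST']   ?? '',
        'page' => $server['REQUEST_URI'] ?? ($server['PHP_SELF'] ?? ''),
        'ip'   => $server['REMOTE_ADDR'] ?? '',
        'date' => date('Y/m/d'),
        'time' => date('H:i:s'),
    ];
}

function is_blocked(string $ip, array $blocklist): bool
{
    return in_array($ip, $blocklist, true);
}

// In a shared config file at the top of each page:
// $visit = collect_visit($_SERVER);
// if (is_blocked($visit['ip'], $blocklist)) { http_response_code(403); exit; }
// ...otherwise insert $visit into the database.
```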

phdr is right, I was confused. Never tried to include PHP files over the network anyway…

Yes. It is time to switch from manually creating folders/files and navigation to letting the computer output “virtual” folders/files and dynamically produce navigation based on content stored in a database.
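The “virtual pages” idea can be sketched as a single index.php that maps the requested slug to content stored in a database. In this sketch a plain array stands in for the MySQL table, and the function name is made up for illustration.

```php
<?php
// Sketch of dynamic page dispatch: one entry script looks up the
// requested slug and returns the stored content instead of keeping
// 30 physical subdirectories each holding its own index.php.
function render_page(string $slug, array $pages): string
{
    // $pages would normally be loaded from MySQL (slug => content).
    return $pages[$slug] ?? '404: page not found';
}

// Usage on the single entry page (not run here):
// $slug = trim(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH), '/');
// echo render_page($slug, $pagesLoadedFromMysql);
```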


Well, I understand that, phdr. But I rarely do that sort of thing, although I have looked at how WordPress does its dynamic pages. Would I be doing the same type of thing? I started this project using WordPress, but since that app sucks, I discarded it and wrote my own. Would I be storing my page content in the MySQL db the same way WordPress stores its content in the db every time a page is updated? There, a record is stored whenever a page is updated and the “Publish” button is pushed; it is stored in the “post” field, and there are plenty of other fields in there too, most of which mean nothing.

thanks!
