file_get_contents shows error on a particular domain

I am using file_get_contents() to fetch the contents of a page. It was working perfectly, but it suddenly stopped working and started showing the error below:




"Warning: file_get_contents(https://uae.souq.com/ae-en/apple-iphone-x-with-facetime-256gb-4g-lte-silver-24051446/i/): failed to open stream: HTTP request failed! in /home/xxx/xxxx/xxx/index.php on line 6.




So I tried the same code on a local server, and it worked perfectly. Then I tried it on another server, and it worked perfectly there too. So I contacted the hosting provider, and they said the problem is with the URL: the site may be blocking access. I then tried another URL (https://www.w3schools.com/), and it fetched the contents without any error.



Now I am really confused about what the problem is. If the problem were with my server, the other URLs shouldn't have worked. And if the problem were with the URL, it shouldn't have worked on the second server or on the local server.



Here is the test code:



<?php
// Fetch the product page and print the returned HTML
$html = file_get_contents("https://uae.souq.com/ae-en/apple-iphone-x-with-facetime-256gb-4g-lte-silver-24051446/i/");
echo $html;
?>


What is the problem here? And even if the problem is with the URL or the server, why was it working perfectly earlier?










php url server hosting file-get-contents

asked Nov 15 '18 at 3:50 by Abdulla, edited Nov 15 '18 at 8:03




  • Your code is working fine; maybe they have put restrictions on your IP.

    – M A
    Nov 15 '18 at 4:22











  • You can always use curl in PHP: php.net/manual/en/book.curl.php

    – M A
    Nov 15 '18 at 4:24













  • Possible duplicate of PHP file_get_contents() returns "failed to open stream: HTTP request failed!"

    – M A
    Nov 15 '18 at 4:25











  • @MA, it is working fine on some servers. Do you mean restrictions on my server's IP?

    – Abdulla
    Nov 15 '18 at 4:38











  • @MA, I tried curl when I started getting this error, but I ran into some issues, as I have never used curl before (see the sketch after these comments).

    – Abdulla
    Nov 15 '18 at 4:39
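
For reference (since curl came up in the comments above), a minimal cURL equivalent of the file_get_contents() call in the question might look like the sketch below; error handling is kept deliberately simple:

<?php
// Minimal cURL equivalent of the file_get_contents() call above.
$ch = curl_init("https://uae.souq.com/ae-en/apple-iphone-x-with-facetime-256gb-4g-lte-silver-24051446/i/");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body as a string instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow HTTP redirects, if any

$html = curl_exec($ch);
if ($html === false) {
    // curl_error() explains why the transfer failed (DNS, timeout, blocked, ...)
    echo "cURL error: " . curl_error($ch) . "\n";
} else {
    echo $html;
}
curl_close($ch);
?>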
















1 Answer
































It sounds like that site (souq.com) has blocked your server. The block may be temporary or it may be permanent. This may have happened because you made too many requests in a short time, or did something else that looked "suspicious," which triggered a mechanism that prevents misbehaving robots from scraping the site.



You can try again after a while. Another thing you can try is setting the User-Agent request header to impersonate a browser. You can find how to do that here: PHP file_get_contents() and setting request headers.
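
As a sketch, the header can be set with a stream context; the User-Agent string below is only an example of a browser-style value, not something the linked answer prescribes:

<?php
// Minimal sketch: send a browser-like User-Agent with file_get_contents().
// The exact UA string is illustrative; any common browser value can be used.
$context = stream_context_create([
    'http' => [
        'method' => 'GET',
        'header' => "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)\r\n",
    ],
]);

$html = file_get_contents(
    "https://uae.souq.com/ae-en/apple-iphone-x-with-facetime-256gb-4g-lte-silver-24051446/i/",
    false,    // use_include_path: not needed for URLs
    $context  // apply the custom HTTP context
);
echo $html;
?>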



If your intention is to make a well-behaved robot, you should set the User-Agent header to something that identifies the request as coming from a bot, and follow the rules the site specifies in its robots.txt.
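
The same stream-context approach works for that; only the header value changes. The bot name and info URL below are hypothetical placeholders:

<?php
// Identify the client as a bot; the name and URL are made-up placeholders.
$context = stream_context_create([
    'http' => [
        'header' => "User-Agent: ExampleFetchBot/1.0 (+https://example.com/bot-info)\r\n",
    ],
]);
$html = file_get_contents(
    "https://uae.souq.com/ae-en/apple-iphone-x-with-facetime-256gb-4g-lte-silver-24051446/i/",
    false,
    $context
);
?>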






answered Nov 15 '18 at 4:37 by Joni











  • That may be the case, because I had set up a cron job that gets the contents every minute. Does setting request headers resolve it if it is blocked permanently?

    – Abdulla
    Nov 15 '18 at 4:47











  • It may help to set the User-Agent header to something a real browser uses.

    – Joni
    Nov 15 '18 at 4:55











  • Can you please explain how to set the User-Agent header so that it looks like the request is coming from a bot?

    – Abdulla
    Nov 15 '18 at 8:52










