Search Results

Search found 52585 results on 2104 pages for 'web host'.

Page 261 of 2104

  • Photoshop, slice, Dreamweaver, web?

    - by Omega
    So I am playing around with Photoshop and Dreamweaver. I have created a site layout and used the slice function on it. Next, I saved it as HTML & images. In Dreamweaver, I open that HTML file and fill the page with content, links, etc. I have a website already, and I would like to use my newly created HTML page on it. But, obviously, if I copy & paste the HTML to my website it won't work, because the images will be missing. Two problems: I can't find the images, and apparently there are a lot of them. I am sure I am making a big mistake with the images. Can someone help me?

    Read the article

  • Grapeshot crawler ignoring robots.txt

    - by QF_Developer
    Has anyone come across a crawler called Grapeshot? It is hammering the same page repeatedly on our website. I believe it is looking for ad-related keywords, based on previous content ad campaigns. The odd thing is we never ran any such campaigns on the page it is so interested in. We do have a few pages running AdSense; is this what has attracted Grapeshot? I've added the following declaration to my robots.txt, but it doesn't seem to be honouring it: User-agent: grapeshot Disallow: / Any ideas on how to block this nuisance crawler? I'm starting to think the best approach is to set up IP rules in IIS.

    Read the article

  • What are the drawbacks of sending XML to browsers and let them apply XSLT?

    - by MainMa
    Context: Working as a freelance developer, I have often made websites completely based on XSLT. In other words, on every request an XML file is generated containing everything we need to know about the page content: the name of the user currently logged in, the top menu entries, whether this menu is dynamic/configurable, the text to display in a specific area of the page, etc. Then XSLT processes (and caches) it into the HTML/XHTML page sent to the browser. Its strong point is that it makes it easier to create small-scale websites, especially with PHP. It is a sort of template engine, but one I prefer to other template engines because it is much more powerful than most of them, and because I know it better and like it. It is also possible, when needed, to expose the raw XML data on demand for automated access, without the need to create separate APIs. Of course, it will fail completely on any medium-scale or large-scale website, since, even with good caching techniques, XSLT still degrades overall website performance and requires more CPU server-side. Question: Modern browsers can take an XML file and transform it with an associated XSL file declared in the XML like <?xml-stylesheet href="demo.xslt" type="text/xsl"?>. Firefox 3 can do it. Internet Explorer 8 can do it too. This means it is possible to migrate XSL processing from the server to the client side for 50% of users (according to browser statistics on several websites where I may want to implement this). Those 50% of users would receive only the XML file on each request, reducing both their and the server's bandwidth (the XML file being much shorter than its processed HTML analog) and the server's CPU usage. What are the drawbacks of this technique? I thought of several, but they don't apply in this situation: Difficult implementation and the need to choose, based on the browser request, when to send raw XML and when to transform it to HTML instead. Obviously, the system will not be much more difficult than the current one; the only changes to make are adding the XSL file link to every XML document and adding a browser check. More IO and bandwidth usage, since the XSLT file will be downloaded by the browsers instead of staying on the server. I don't think this will be a problem, since the XSLT file will be cached by the browsers (just as images, CSS, and JavaScript files are cached now). Possibly some problems on the client side, for example when saving a page in some browsers. Difficulty debugging code: it is impossible to obtain the HTML source the browser is actually using, since the only displayed source is the downloaded XML. On the other hand, I rarely look at HTML code on the client side, and in most cases it is not directly usable (whitespace being removed).
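
    Purely as an illustration of the client-side half of this approach, here is a minimal JavaScript sketch that loads an XML document and an XSLT stylesheet and transforms them in the browser with the standard XSLTProcessor API (the file names page.xml and demo.xslt and the #content selector are placeholders, not anything from the question):

      // Fetch the XML data and the XSLT stylesheet, then transform client-side.
      // Works in Firefox, Chrome and Safari; older IE relies on MSXML instead.
      async function renderFromXml(xmlUrl, xslUrl, targetSelector) {
        const [xmlText, xslText] = await Promise.all([
          fetch(xmlUrl).then(r => r.text()),
          fetch(xslUrl).then(r => r.text())   // cached by the browser like CSS/JS
        ]);
        const parser = new DOMParser();
        const xmlDoc = parser.parseFromString(xmlText, "application/xml");
        const xslDoc = parser.parseFromString(xslText, "application/xml");
        const processor = new XSLTProcessor();
        processor.importStylesheet(xslDoc);
        const fragment = processor.transformToFragment(xmlDoc, document);
        document.querySelector(targetSelector).replaceChildren(fragment);
      }

      renderFromXml("page.xml", "demo.xslt", "#content");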

    Read the article

  • Will using two different tracking codes affect my SERP

    - by Danny Hefer
    Hello everyone and thanks for your time! I am facing a problem after a site migration. The new site is basically an improved version of the old site, with the same content and some extras. After pointing the domain name to the new site, the old site was still online for a while but didn't get any traffic. The new site has its own tracking code. So, the old tracking code has age (something like 7 years) but no visitors for a month, while the new tracking code is a month old with acceptable traffic. How do you think Google will react if I add the old tracking code to the new site? Thanks in advance!

    Read the article

  • Copyright of pictures uploaded to a website?

    - by All
    I want to run a website like a stock photo site. How can I be sure that the uploader is the real copyright holder of the picture? Is it possible to leave the responsibility for this copyright claim to the uploader, or is the webmaster ultimately responsible for the website content? It generally confuses me; for example, stock photo websites need a form signed by the model for photos showing a human face. How can they be sure that the signature actually belongs to the model? How do they keep themselves safe from a possible lawsuit in this case (e.g. if selling photos of a model with a fake signature)?

    Read the article

  • How to Choose a Web Developer to Create Your Online Template Site

    Online template systems typically offer you an "easy" and inexpensive way to build your website. Notice the quotation marks around "easy": the actual process of building and maintaining an online template site might not feel easy or inexpensive. The reality is that, in most cases, it really is easier to use an online template system than to start a new website from scratch. You can also get a site up much more quickly, because the internal structure and background images of the site are already done. However, you should choose a developer who has some experience in this area.

    Read the article

  • Webmaster Tools word count

    - by Henrik Erlandsson
    Is there a way to verify that Googlebot finds the headings and the content, for example by word count? I'm asking because I tried a program called Screaming Frog, which fails to even fetch the first h1 on a validated page - for about 1/3 of all the pages(!) - which made me unsure. Even though the site looks hunky-dory in Webmaster Tools, I'd like to know what a Googlebot-like content crawler finds on my page and in what order. Any tips on such tools are appreciated. This is not about keyword count.
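
    As a rough way to check this yourself, the sketch below fetches the raw markup, strips scripts, styles and tags, and counts the remaining words. It needs Node 18+ for the global fetch, the URL is a placeholder, and it only approximates what a text-only crawler sees, not what Googlebot actually indexes:

      // Fetch the raw HTML, strip script/style blocks and tags, count words.
      const url = "https://example.com/";   // placeholder page to inspect

      fetch(url)
        .then(res => res.text())
        .then(html => {
          const text = html
            .replace(/<script[\s\S]*?<\/script>/gi, " ")
            .replace(/<style[\s\S]*?<\/style>/gi, " ")
            .replace(/<[^>]+>/g, " ");                 // drop remaining tags
          const words = text.split(/\s+/).filter(Boolean);
          console.log(words.length + " words in the raw markup");
        });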

    Read the article

  • Very High CPU usage (100%) from just browsing the Web

    - by cole
    I tested on Firefox and Chromium. I'm at 100% while loading pages, which causes them to load slowly, and when I don't have an application running I'm at 40% CPU (at least). Everything is slow, basically. I'm also already on Ubuntu Classic, so I'm not using Unity. Should I go to 10.04? Is that more stable? On Windows this wasn't an issue. I have a dual boot with XP and a 2.4 GHz Intel Celeron with 768 MB RAM and an Nvidia 6200 graphics card. I heard 10.04 was the most stable. Any suggestions?

    Read the article

  • Creating an encrypted, web-based proxy

    - by Jason
    I have moved to Asia where my internet connection is censored and I'd like to check my messages from social sites which happen to be blocked. As virtually all proxy servers are blocked in this country, I've decided to attempt to roll my own encrypted proxy server. Please note, the key word here is encrypted—if the sniffer sees anything like f@c3b00k or w:k:p3d:ia travelling down the wire I'm had. I have a website hosted with GoDaddy (Windows with PHP 5.2 & IIS 7). Is there any way I can set up an encrypted proxy through this service? If so, how, and what open source tools are available to use?

    Read the article

  • How to get my laptop to detect SD cards inserted into its built-in card reader?

    - by Candelight
    My laptop has a built-in SD card reader, but it cannot detect a card when one is inserted. Here is the result when I type lspci in the terminal:

      00:00.0 Host bridge: ATI Technologies Inc RS480 Host Bridge (rev 10)
      00:01.0 PCI bridge: ATI Technologies Inc RS480 PCI Bridge
      00:06.0 PCI bridge: ATI Technologies Inc RS480 PCI Bridge
      00:07.0 PCI bridge: ATI Technologies Inc RS480 PCI Bridge
      00:12.0 IDE interface: ATI Technologies Inc IXP SB400 Serial ATA Controller (rev 80)
      00:13.0 USB Controller: ATI Technologies Inc IXP SB400 USB Host Controller (rev 80)
      00:13.1 USB Controller: ATI Technologies Inc IXP SB400 USB Host Controller (rev 80)
      00:13.2 USB Controller: ATI Technologies Inc IXP SB400 USB2 Host Controller (rev 80)
      00:14.0 SMBus: ATI Technologies Inc IXP SB400 SMBus Controller (rev 83)
      00:14.1 IDE interface: ATI Technologies Inc IXP SB400 IDE Controller (rev 80)
      00:14.2 Audio device: ATI Technologies Inc IXP SB4x0 High Definition Audio Controller (rev 01)
      00:14.3 ISA bridge: ATI Technologies Inc IXP SB400 PCI-ISA Bridge (rev 80)
      00:14.4 PCI bridge: ATI Technologies Inc IXP SB400 PCI-PCI Bridge (rev 80)
      00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
      00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
      00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
      00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
      01:05.0 VGA compatible controller: ATI Technologies Inc RS482 [Radeon Xpress 200M]
      04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01)
      05:04.0 FireWire (IEEE 1394): O2 Micro, Inc. Firewire (IEEE 1394) (rev 02)
      05:04.2 SD Host controller: O2 Micro, Inc. Integrated MMC/SD Controller (rev 01)
      05:04.3 Mass storage controller: O2 Micro, Inc. Integrated MS/xD Controller (rev 01)
      05:09.0 Network controller: Ralink corp. RT2561/RT61 rev B 802.11g

    Read the article

  • Page load speed's effect on crawl rate

    - by Sam Pegler
    We've noticed a big drop in the total pages crawled per day on our site. We have no control over the crawl rate in Google Webmaster Tools, so it's possible this has been changed by Google. However, it's a fairly large site and I wouldn't have thought that the crawl rate would have been decreased. What we have noticed, though, is a sizeable increase in page load times, and in my mind this would be the cause. Can anyone confirm whether the crawl rate is directly correlated with page load time? It seems logical: longer page load time, fewer pages crawled. Any decent documentation on this would be appreciated; I don't normally have any input on SEO, so this is new to me.

    Read the article

  • Ubuntu's New Web Office Integration

    LinuxUK: "Take for instance a low powered, possibly mobile/embedded system with limited processing power and memory. A cloud based service for these devices could allow resource intensive tasks to be offloaded to an online server somewhere, greatly improving the UX"

    Read the article

  • WHMCS Fatal error: Out of memory when viewing an invoice PDF

    - by prakash
    I can log into WHMCS and can access everything I should be able to access, but if I click View PDF Invoice, the following error occurs: Fatal error: Out of memory (allocated 67633152) (tried to allocate 76 bytes) in /home/xxxx/public_html/whmcs/includes/classes/class.tcpdf.php on line 8419 I have already set the PHP memory limit to 256 MB, but the error still occurs. At the time of the error, the process memory exceeds the allocation I set. I checked the log file and found the following errors: #2 /home/xxxxx/public_html/client/includes/classes/class.tcpdf.php(8453): TCPDF->Image('/home/xxxxx/...', 20, 25, 75, 17.5816023739, 'PNG', '', '', false, 300, '', false, 8) #3 /home/xxxxx/public_html/client/includes/classes/class.tcpdf.php(7881): TCPDF->ImagePngAlpha('/home/xxxxx/...', 20, 25, 337, 79, 75, 17.5816023739, 'PNG', '', '', false, 300, '', NULL) While I was investigating the issue above, I also noticed a further error condition in a screenshot (not reproduced here).

    Read the article

  • Using another domain with Google App Engine

    - by gsingh2011
    I'm trying to change my Google App Engine domain (domain.appspot.com) to the domain I bought from 1&1.com (mydomain.com). I went into the Google App Engine settings and added the domain. After creating a Google Apps account, I was asked to verify my domain. The directions say that 1&1 doesn't allow me to create TXT records, so I can't use that method for verification. Their alternative is to upload an HTML file to my server, but I didn't buy hosting with my domain, just the domain itself. My files are on domain.appspot.com. How can I make mydomain.com point to domain.appspot.com? I've added ns1.googleghs.com as my nameserver in my 1&1 DNS settings, but I still can't verify my domain with Google Apps.

    Read the article

  • Image Collector Rips Web Page Images to Your Dropbox Account

    - by Jason Fitzpatrick
    Chrome: Image Collector is a simple Chrome extension that rips the images on the page you’re visiting to your Dropbox (or Google Drive) accounts. Just click the icon, uncheck any images you don’t want it to download, and click save. You can, technically, modify the script to download the images directly to your hard drive, but modifying it was a bit of a hassle and the default save-to-Dropbox action is so smooth we saw little reason to do so. Hit up the link below to grab a free copy. Image Collector [via Freeware Genuis]
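
    For a sense of what such an extension does under the hood (this is not the extension's actual source, just a sketch you can paste into the browser console), collecting the image URLs on the current page is only a few lines; the checkbox UI and the upload to Dropbox or Google Drive are the parts the extension adds on top:

      // Gather the URL of every image on the current page.
      const urls = Array.from(document.images, img => img.src)
        .filter(src => src.startsWith("http"));
      console.log(urls.join("\n"));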

    Read the article

  • Is it better to define all routes in the Global.asax than to define them separately in the areas?

    - by Matthew Patrick Cashatt
    I am working on an MVC 4 project that will serve as an API layer of a larger application. The developers who came before me set up separate Areas to separate different API requests (i.e. Search, Customers, Products, and so forth). I am noticing that each Area has its own area registration class that defines routes for that area. However, the routes defined are not area-specific (i.e. {controller}/{action}/{id} might be defined redundantly in a couple of areas). My instinct would be to move all of these route definitions to a common place like the Global.asax to avoid redundancy and collisions, but I am not sure if I am correct about that.

    Read the article

  • Will many links to the same page without nofollow penalize the host site in the search engine rankings?

    - by Evgeny
    Maybe a silly question, but I'll give it a shot :). On my forum app I would like to allow users with sufficiently high reputation to display links to their home pages under every post - without the nofollow attribute (while lower-rep users will have nofollow). I am happy to help the site contributors improve their own rankings, but I am not sure whether this can actually hurt the rank of the host (the site that hosts those links), as the same link to a user's home page may potentially be peppered across the pages of the host. What do you think? Thanks.

    Read the article

  • How do I add a VMware ESXi Host to Microsoft Virtual Machine Manager?

    - by user63250
    I am trying to manage virtual machines running on a VMware ESXi host using Microsoft System Center Virtual Machine Manager. I was able to add the ESXi machine using the "Add VMware VirtualCenter server" option, but I can't access any of the VMs on the datastore associated with this ESXi server. The datastore of the ESXi box shows up with the correct name, but it won't let me see any of the VMs that have already been created; I get "There are no virtual machines on this host." Because I couldn't get any of the existing virtual machines to show up, I tried creating some new ones. When using VMM to connect to ESXi and create new VMs, I get the following error messages in the "rating explanation" section: The virtualization software on the selected host does not support virtual hard disks on an IDE bus. and The virtualization software on the host XXXXXX does not support the creation of dynamic virtual hard disks. Any ideas on why I can't manage existing machines and why I can't create new ones? The existing machines were created in vSphere. I should note that the ESXi server and the server running SCVMM are both on the same domain. I should also note that although the ESXi box has been added as a VirtualCenter server, when I try to add it through the "Add Host" option, I get an error message saying "Virtual Machine Manager cannot complete the VirtualCenter action on server ESXi because of the following error: The operation is not supported on the object."

    Read the article

  • jQuery Carousel - Need a Script Hack to Customize

    - by Leah
    I hope someone can help me out here. I am coding a site for a client using carouFredSel. I am using it because it allows for variable widths within the slideshow, as seen here: http://2938.sandbox.i3dthemes.net/index-old.html. My problems are as follows: with the built-in auto-centering script, the scrolling changes the white space between images during the transition to fit the width of the wrapper. I need a hack to keep the white space the same during the transition, like this: http://2938.sandbox.i3dthemes.net/index.html. Also, I can't figure out how to put this snippet into my code and make it work: scroll: { onAfter: function() { if ( $(this).triggerHandler( "currentPosition" ) == 0 ) { $(this).trigger( "pause" ); } } }
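
    A hedged guess at where that snippet belongs: it is a scroll option, so it goes inside the options object passed to carouFredSel. The #carousel selector and the auto/items settings below are placeholders, not the questioner's actual configuration:

      // Sketch only: pass scroll.onAfter as part of the plugin options.
      $("#carousel").carouFredSel({
        auto: true,
        scroll: {
          items: 1,
          onAfter: function() {
            // pause once the carousel has scrolled back to the first item
            if ( $(this).triggerHandler( "currentPosition" ) == 0 ) {
              $(this).trigger( "pause" );
            }
          }
        }
      });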

    Read the article

  • How to parse JSON data from the web faster [closed]

    - by Kaidul Islam Sazal
    I have a JSON inventory file, inventory.json, on the server like this: [ { "body" : "SUV", "color" : { "ext" : "White diamond pearl", "int" : "Taupe" }, "id" : "276181", "make" : "Acura", "miles" : 35949, "model" : "RDX", "pic" : [ { "full" : "http://images1.dealercp.com/90961/000JNBD/001_0292.jpg" } ], "power" : { "drive" : "Front wheel drive", "eng" : "2.3L DOHC PGM-FI 16-VALVE", "trans" : "Automatic" }, "price" : { "net" : 29488 }, "stock" : "6942", "trim" : "AWD 4dr Tech Pkg SUV", "vin" : "5J8TB2H53BA000334", "year" : 2011 }, { "body" : "Sedan", "color" : { "ext" : "Premium white pearl", "int" : "Taupe" }, "id" : "275622", "make" : "Acura", "miles" : 40923, "model" : "TSX", "pic" : [ { "full" : "http://images1.dealercp.com/90961/000JMC6/001_1765.jpg" } ], "power" : { "drive" : "Front wheel drive", "eng" : "2.4L L4 MPI DOHC 16V", "trans" : "Automatic" }, "price" : { "net" : 22288 }, "stock" : "6945", "trim" : "4dr Sdn I4 Auto Sedan", "vin" : "JH4CU2F66AC011933", "year" : 2010 } ] Those are two entries; there are almost 5000 entries like this. I parse this JSON like this: var url = "inventory/inventory.json"; $.getJSON(url, function(data){ $.each(data, function(index, item){ //straight-forward loop if(item.year == 2012) { $('#desc').append(item.make + ' ' + item.model + ' ' + '<br/>' + item.price.net + '<br/>' + item.pic[0].full); } }); }); This is working fine, but the problem is that this searching and fetching process is a little bit slow, as there are already 5000 entries and the number is increasing day by day. It is a straightforward loop over the data, a normal brute-force method. Now I want to know whether there is a more time-efficient way to parse it. Is there a faster method than the straightforward loop?
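
    One common way to make the lookups cheaper is to build an index once, right after the JSON arrives, instead of re-scanning all 5000 entries on every query. A rough sketch using the same jQuery calls and field names as above (the byYear name is only illustrative); note that the download itself still dominates, so filtering on the server or splitting the file is usually the bigger win:

      var byYear = {};   // year -> array of vehicles, built once after loading

      $.getJSON("inventory/inventory.json", function (data) {
        $.each(data, function (i, item) {
          (byYear[item.year] = byYear[item.year] || []).push(item);
        });

        // later queries hit the index instead of looping over everything
        $.each(byYear[2012] || [], function (i, item) {
          $('#desc').append(item.make + ' ' + item.model + '<br/>' +
                            item.price.net + '<br/>' + item.pic[0].full);
        });
      });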

    Read the article

  • What's the best practice for SOA exception handling?

    - by sun1991
    Here's some interesting debate going on between me and my colleague when it comes to handling SOA exceptions: On one side, I support what Juval Lowy said in Programming WCF Services, 3rd Edition: As stated at the beginning of this chapter, it is a common illusion that clients care about errors or have anything meaningful to do when they occur. Any attempt to bake such capabilities into the client creates an inordinate degree of coupling between the client and the object, raising serious design questions. How could the client possibly know more about the error than the service, unless it is tightly coupled to it? What if the error originated several layers below the service—should the client be coupled to those low-level layers? Should the client try the call again? How often and how frequently? Should the client inform the user of the error? Is there a user? By having all service exceptions be indistinguishable from one another, WCF decouples the client from the service. The less the client knows about what happened on the service side, the more decoupled the interaction will be. On the other side, here's what my colleague suggests: I believe it's simply incorrect, as it does not align with best practices in building a service-oriented architecture and it ignores the general idea that there are problems users are able to recover from, such as not keying a value correctly. If we considered only system exceptions, perhaps this idea holds, but system exceptions are only part of the exception domain. User-recoverable exceptions are the other part of the domain and are likely to happen on a regular basis. I believe the correct way to build a service-oriented architecture is to map user-recoverable situations to checked exceptions, then to marshal each checked exception back to the client as a unique exception that client application programmers are able to handle appropriately. Marshal all runtime exceptions back to the client as a system exception, along with the stack trace, so that it is easy to troubleshoot the root cause. I'd like to know what you think about this. Thank you.
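
    To make the second position concrete, here is a small sketch (plain JavaScript rather than WCF, purely to illustrate the mapping idea; the ValidationFault type and field names are invented for the example): user-recoverable problems come back as distinguishable faults the client can act on, while everything else collapses into one opaque system fault.

      // Hypothetical recoverable error type for the example.
      class ValidationFault extends Error {
        constructor(field, message) {
          super(message);
          this.field = field;
        }
      }

      function toFaultResponse(err) {
        if (err instanceof ValidationFault) {
          // user-recoverable: say exactly what the caller can fix
          return { status: 400, fault: "VALIDATION", field: err.field, message: err.message };
        }
        // system exception: log details server-side, return an opaque fault
        console.error(err.stack);
        return { status: 500, fault: "SYSTEM", message: "The service could not process the request." };
      }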

    Read the article

  • Recording slow web stream

    - by Budric
    I'm trying to record an MPEG-2 video stream from a website that doesn't have the greatest bandwidth. The video often buffers. I want to download the stream and watch it offline. The stream format received is: Stream #0.0[0x44]: Audio: mp2, 48000 Hz, stereo, s16, 192 kb/s Stream #0.1[0x45]: Video: mpeg2video (Main), yuv420p, 704x576 [PAR 16:11 DAR 16:9], 15000 kb/s, 27.19 fps, 25 tbr, 90k tbn, 50 tbc I use the following command to transcode the stream: ffmpeg -i "http://url" -y -vcodec libx264 -b 3000k -acodec copy /tmp/stream.mp4 Unfortunately, after a few seconds ffmpeg stops recording with an error: [mpegts @ 0x1f0b9c0] PES packet size mismatch [mp2 @ 0x1f14640] incomplete frame Error while decoding stream #0.0 [mpeg2video @ 0x1f16860] ac-tex damaged at 0 26 [mpeg2video @ 0x1f16860] Warning MVs not available I've tried encoding with VLC as well, with similar issues. Although VLC doesn't stop encoding, the output video has regions where it hangs. vlc -I dummy "http://url" --network-caching="1000" --sout="#transcode{vcodec=h264,vb=3000,acodec=mp3,ab=192}:std{access=file,mux=mp4,dst=/tmp/stream.mp4}" [mpeg2video @ 0x7f2d4c001e20] ac-tex damaged at 9 33 [mpeg2video @ 0x7f2d4c001e20] Warning MVs not available [mpeg2video @ 0x7f2d4c001e20] concealing 132 DC, 132 AC, 132 MV errors [mpeg2video @ 0x7f2d4c001e20] ac-tex damaged at 16 17 [mpeg2video @ 0x7f2d4c001e20] Warning MVs not available [mpeg2video @ 0x7f2d4c001e20] concealing 836 DC, 836 AC, 836 MV errors libdvbpsi error (PSI decoder): TS discontinuity (received 4, expected 3) for PID 0 I also tried FLV transcoding and it shows up with its own set of issues, like the output FLV file hanging in certain parts. Does anyone know what's wrong or how to fix this?

    Read the article
