Search Results

Search found 37166 results on 1487 pages for 'google website optimizer'.


  • What WYSIWYG software can I use to create a web page?

    - by Roman
    I have always made web pages by writing HTML code, but now I would like to try a WYSIWYG approach. Can anybody recommend a program I can use for that? I mean a program in which you can move buttons, tables and pictures with the mouse, change their size and shape with the mouse, and use nice templates for blocks of text, buttons, backgrounds and so on. I am using Windows 7. Maybe I already have something pre-installed?

    Read the article

  • XPath automation software

    - by holms
    Too bad this topic was closed, but I have pretty much the same question. I want to construct XPaths for common HTML blocks that appear on a page. For example, you give the software two URLs that contain the SAME HTML blocks (divs), but with different content in them. Given two stackoverflow.com URLs, the software could detect that the same div#id is used on both pages and simply give the XPaths of those HTML blocks. Of course I can find the XPaths myself; as far as I remember, Firebug makes it easy and shows the XPath of every HTML block, but that becomes a tedious procedure if you want to get XPaths for LOTS of HTML elements. That's why I want this kind of software to help with the routine.
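
    A minimal sketch of the idea in Python with lxml and requests (the (tag, id, class) heuristic and the example URLs are just assumptions): fetch both pages, collect the elements that carry an id or class, and print the XPath of every element whose signature appears in both documents.

        # xpath_common.py -- rough sketch: find blocks present on both pages and print their XPaths.
        # Assumes `pip install lxml requests`; the URLs below are placeholders.
        import requests
        from lxml import html

        def signatures(tree):
            """Map (tag, id, class) signatures to the elements that carry an id or class."""
            sigs = {}
            for el in tree.iter():
                if not isinstance(el.tag, str):
                    continue  # skip comments and processing instructions
                key = (el.tag, el.get("id"), el.get("class"))
                if key[1] or key[2]:
                    sigs.setdefault(key, []).append(el)
            return sigs

        def common_xpaths(url_a, url_b):
            tree_a = html.fromstring(requests.get(url_a).content)
            tree_b = html.fromstring(requests.get(url_b).content)
            sigs_b = signatures(tree_b)
            for key, elements in signatures(tree_a).items():
                if key in sigs_b:
                    for el in elements:
                        # getpath() returns an absolute XPath such as /html/body/div[3]
                        print(key, tree_a.getroottree().getpath(el))

        if __name__ == "__main__":
            common_xpaths("https://stackoverflow.com/questions/1",
                          "https://stackoverflow.com/questions/2")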

    Read the article

  • Block a URL at browser level

    - by Farseeker
    Does anyone have a solution (that doesn't involve editing the hosts file) to block a particular URL in Firefox? The basic back story is that I'm trying to discipline myself. I'm spending FAR too much time over at Server Fault, so I want to genuinely block the site on my work PC so that every time I find myself flicking to it during work time I can't see it, but I'd like to be able to disable the block during my lunch break (so I only spend 40 minutes a day there, rather than 4 hours). That said, I don't want to block it at the router, nor for anyone else.

    Read the article

  • Client certificate based encryption

    - by Timo Willemsen
    I have a question about the security of a file on a webserver. I have a file on my webserver which is used by my web application: it's a Bitcoin wallet. Essentially it's a file with a private key in it, used to decrypt messages. My web application uses the file because it's needed to receive transactions made through the Bitcoin network. I was looking into ways to secure it. Obviously, anyone with root access to the server can do the same as my application; however, I still need to find a way to encrypt it. I was thinking of something like this, but I have no clue if it would actually work: the client logs in with some sort of client certificate; the web application creates a wallet file; the web application encrypts the file with the client certificate; if the application wants to access the file, it has to use the client certificate. So basically, if someone gets root access to the site, they cannot access the wallet. Is this possible, and does anyone know of an implementation of it? Are there any problems with this, and how safe would it be?
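
    A rough sketch of the scheme, assuming the client certificate carries an RSA key and the Python `cryptography` package is available (the file names are made up): encrypt the wallet with a fresh symmetric key and wrap that key with the certificate's public key, so only the holder of the matching private key can unwrap it. Note the trade-off this implies: the server can write and encrypt the wallet, but it cannot read it back without the client taking part in decryption.

        # wallet_encrypt.py -- rough sketch of the scheme described above, not a vetted implementation.
        # Assumes `pip install cryptography` and an RSA client certificate; paths are made up.
        from cryptography import x509
        from cryptography.fernet import Fernet
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding

        def encrypt_wallet(wallet_path, client_cert_pem_path, out_path):
            # Pull the public key out of the client certificate the user logged in with.
            cert = x509.load_pem_x509_certificate(open(client_cert_pem_path, "rb").read())
            pub = cert.public_key()

            # Hybrid encryption: a fresh symmetric key encrypts the wallet,
            # and only that small key is encrypted with the certificate's RSA key.
            sym_key = Fernet.generate_key()
            ciphertext = Fernet(sym_key).encrypt(open(wallet_path, "rb").read())
            wrapped_key = pub.encrypt(
                sym_key,
                padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                             algorithm=hashes.SHA256(), label=None),
            )

            # Store the wrapped key next to the ciphertext; only whoever holds the
            # certificate's private key (the client, not the server) can unwrap it.
            with open(out_path, "wb") as f:
                f.write(len(wrapped_key).to_bytes(4, "big") + wrapped_key + ciphertext)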

    Read the article

  • Command-line HTTP crawler for Windows?

    - by Pekka
    Would somebody have a recommendation for a web site crawler that can be invoked, and equipped with settings, from the command line? It would need to run in a Windows environment. Saving the data, following stylesheet links, etc. is not an issue. I only need the crawler to start with a page, parse it, and follow all the links on the same domain so that, in the end, all pages on the site have been requested once. Background: I'm setting up a web site that gets uploaded frequently from an office location. Combining data from various sources, it has several levels of caching. I don't want the first user who visits the site after a fresh upload to have to wait until the page has been generated and saved in the cache.
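
    If no ready-made tool fits, a bare-bones cache warmer can be sketched in standard-library Python (which also runs on Windows). This is only a sketch: the start URL is a placeholder, and it simply requests every same-domain HTML page once without saving anything.

        # warmcache.py -- minimal cache-warming crawler sketch (standard library only).
        import urllib.request
        from collections import deque
        from html.parser import HTMLParser
        from urllib.parse import urljoin, urldefrag, urlparse

        class LinkParser(HTMLParser):
            def __init__(self):
                super().__init__()
                self.links = []
            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    href = dict(attrs).get("href")
                    if href:
                        self.links.append(href)

        def warm(start_url):
            host = urlparse(start_url).netloc
            queue, seen = deque([start_url]), {start_url}
            while queue:
                url = queue.popleft()
                try:
                    with urllib.request.urlopen(url, timeout=30) as resp:
                        body = resp.read()
                        if "html" not in resp.headers.get("Content-Type", ""):
                            continue  # requested once, but nothing to parse
                except Exception as exc:
                    print("failed:", url, exc)
                    continue
                parser = LinkParser()
                parser.feed(body.decode("utf-8", errors="replace"))
                for href in parser.links:
                    link = urldefrag(urljoin(url, href))[0]
                    if urlparse(link).netloc == host and link not in seen:
                        seen.add(link)
                        queue.append(link)

        if __name__ == "__main__":
            warm("http://www.example.org/")  # placeholder start URL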

    Read the article

  • My DNS cannot resolve a web site address?

    - by ipkiss
    Hello all, Recently I have not been able to access the webpage bbc.co.uk anymore, while I can access other websites smoothly. At first, I thought there might be some problem with my laptop. However, if I use my laptop through my company network, I can load the page bbc.co.uk normally. Then I thought maybe my ADSL at home blocks that web address. However, I tried another laptop on my home ADSL and it can load the page bbc.co.uk very fast. Now I do not know what the problem could be. Can anyone tell me, please? Thank you.
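
    One way to narrow it down (a quick diagnostic sketch, not a fix): on the affected laptop, compare what the system resolver returns with what a public resolver returns. If the first lookup fails or differs from the second, the home router's or ISP's DNS is the likely culprit and changing the laptop's DNS servers is worth a try. The host name and the 8.8.8.8 resolver are just examples, and the third-party dnspython package is assumed.

        # dnscheck.py -- compare the system resolver with a public resolver.
        # Needs `pip install dnspython`; host and resolver below are examples.
        import socket
        import dns.resolver  # third-party: dnspython

        HOST = "www.bbc.co.uk"

        # 1. What does the system resolver (the one the browser uses) return?
        try:
            print("system resolver:", socket.gethostbyname(HOST))
        except socket.gaierror as exc:
            print("system resolver failed:", exc)

        # 2. What does a public resolver return, bypassing the router/ISP DNS?
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = ["8.8.8.8"]
        try:
            answers = resolver.resolve(HOST, "A")
            print("8.8.8.8:", [a.address for a in answers])
        except Exception as exc:
            print("8.8.8.8 failed:", exc)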

    Read the article

  • Web application/ site service (like Google App Engine) for PHP/ MySQL and Postgres

    - by Simon
    I would like to find a service similar to Google App Engine for PHP/MySQL/Postgres sites and applications. We host two different types of site.

    i). PHP/MySQL/Zend Framework

        <VirtualHost *:80>
            DocumentRoot "/home/websites/website.com/public"
            ServerName website.com
            # This should be omitted in the production environment
            SetEnv APPLICATION_ENV development
            <Directory "/home/websites/website.com/public">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
                RewriteEngine On
                RewriteCond %{REQUEST_FILENAME} -s [OR]
                RewriteCond %{REQUEST_FILENAME} -l [OR]
                RewriteCond %{REQUEST_FILENAME} -d
                RewriteRule ^.*$ - [NC,L]
                RewriteRule ^.*$ index.php [NC,L]
            </Directory>
        </VirtualHost>

    ii). Matrix CMS - PHP/Postgres + loads of PEAR classes

        <VirtualHost *:80>
            ServerName server.example.com
            DocumentRoot /home/websites/mysource_matrix/core/web
            Options -Indexes FollowSymLinks
            <Directory /home/websites/mysource_matrix>
                Order deny,allow
                Deny from all
            </Directory>
            <DirectoryMatch "^/home/websites/mysource_matrix/(core/(web|lib)|data/public|fudge)">
                Order allow,deny
                Allow from all
            </DirectoryMatch>
            <DirectoryMatch "^/home/websites/mysource_matrix/data/public/assets">
                php_flag engine off
            </DirectoryMatch>
            <FilesMatch "\.inc$">
                Order allow,deny
                Deny from all
            </FilesMatch>
            <LocationMatch "/(CVS|\.FFV)/">
                Order allow,deny
                Deny from all
            </LocationMatch>
            Alias /__fudge /home/websites/mysource_matrix/fudge
            Alias /__data /home/websites/mysource_matrix/data/public
            Alias /__lib /home/websites/mysource_matrix/core/lib
            Alias / /home/websites/mysource_matrix/core/web/index.php/
        </VirtualHost>

    My key requirements are: I don't want to worry/know/care about the server/infrastructure; secure, up-to-date software/OS; good monitoring; automatic scalability; SLA. I apologise for the length of the question. In short, all I want to do is i). create vhost, ii). create db, iii). install app/site, iv). relax. Thanks. Edit: I include the Matrix vhost because that is the only complication that I cannot really do via a .htaccess file.

    Read the article

  • Unable to launch google.com inside Ubuntu Linux

    - by Anuradha
    I have installed an Ubuntu Linux VM on my Win XP box. I am able to open http://google.com on Win XP, but when I log in to Ubuntu Linux and launch this site, I get an error: Server not found. The network setting I have on the Ubuntu VM is: Adapter 1 attached to Bridged Adapter. I tried NAT as well, but nothing seems to work. I am not in China; I give google.com as a mere example. We have a test website which cannot be launched inside Ubuntu Linux either.

    Read the article

  • wget not converting links

    - by acrosman
    I am trying to mirror a fairly large site (20,000+ pages) prior to a major overhaul. Basically, I need a backup before cutting over to the new one in case we forgot something we need (we'll have about 1,000 pages at launch). The site runs on a CMS that I cannot easily extract usable data from, so I'm trying to make the copy with wget. My problem is that wget does not appear to actually convert links, despite the presence of --convert-links or -k in the command. I've tried a couple of different combinations of flags, but I haven't been able to get the output I need. The most recent failed attempt was: nohup wget --mirror -k -l10 -PafscSnapshot --html-extension -R *calendar* -o wget.log http://www.example.org & I've also tried including --backup-converted, and --convert-links instead of -k (not that it should have mattered). I've done it with and without -P and -l; again, not that they should matter. The result is files that still have links like: http://www.example.org/ht/d/sp/i/17770

    Read the article

  • public family tree

    - by Remus Rigo
    Hi all, Does anyone know an ancestry site that allows you to create a public profile or tree, so that other visitors can see your family tree? On all the sites that I have found (dynastree.com, familylink.com, ancestry.com, genebase.com), if someone wants to see your family tree, they must be members or register. Thanks

    Read the article

  • Multiple personalities for a blog

    - by Ralph Rickenbach
    I am using Blogger.com and serve multiple websites. I would like to display the content of one single blog on all sites, using URLs like blog.sitename.xxx and each site's corporate identity. The sites are rather different, but a solution that allows site-specific CSS would suffice as an absolute minimum; better would be to have different templates. Any solution?

    Read the article

  • Experiences with Google TiSP?

    - by Zypher
    I got an email from Google a couple of hours ago (around 12 AM EST today) saying that Google's TiSP service is now available in my area. This seems like a great deal compared to my current 16 Mbps cable connection at work; however, I'm a little nervous about the fact that Linux support is "coming soon". I was wondering if anyone has successfully installed this system and gotten it working with their Linux infrastructure? I'm assuming that there shouldn't be any issues since we have an ASA in front of our internet connection; TiSP shouldn't care what is behind that. Any insight would be greatly appreciated!

    Read the article

  • DNS configuration for external/internal resolution

    - by FerranB
    Hi, We have an internal web server which is reachable both from the Internet and from the local network. The server is located on the local network. The current configuration is the following: to access it through the Internet you use http://webexample.com; to access it through the local network you use http://myweb. The main problem is that local users cannot share links with external users, and that's a problem for us. I want to set up the following configuration: all users (local and Internet) access the server through http://webexample.com, and the local DNS server resolves webexample.com to the local network IP (i.e. 192.168.2.100). Any other suggestions? What is the best way to override webexample.com resolution on Windows Server: can it be done on the DNS server, or does it have to be done in the hosts file?

    Read the article

  • Caching Reverse-Proxy ISP Host for a Low-Bandwidth Server

    - by Casey
    I am building a webcam with an HTTP server that will be running over a low-bandwidth connection. The content on the site will change every 5 to 10 minutes. Instead of serving files directly over this connection, are there hosting companies that can act as a reverse proxy for my site? That way, if nobody is using the site, the local internet connection remains idle, and if I receive 1000 hits all at the same time, only one HTTP GET is required and the hosting company (on a fat pipe) serves the other 999 requests. This doesn't sound like a very common usage model, but I feel like it would be the optimal solution to my situation.
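
    What this describes is essentially a caching reverse proxy with a short TTL, which is what Varnish, nginx's proxy_cache, or a CDN provide off the shelf. Purely to illustrate the idea (not production code), here is a toy version in Python; the origin URL and the 5-minute TTL are assumptions, and unlike a real proxy it does not coalesce simultaneous cache misses.

        # toyproxy.py -- toy caching reverse proxy: many client hits, few origin fetches.
        # ORIGIN and TTL are assumptions for illustration; not production code.
        import time
        import urllib.request
        from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

        ORIGIN = "http://camera.example.local"  # the low-bandwidth origin server
        TTL = 300                               # content changes every 5-10 minutes
        _cache = {}                             # path -> (expires_at, content_type, body)

        class CachingHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                entry = _cache.get(self.path)
                if entry is None or entry[0] < time.time():
                    try:
                        # Cache miss or stale entry: one origin fetch covers everyone
                        # requesting this path for the next TTL seconds.
                        with urllib.request.urlopen(ORIGIN + self.path, timeout=30) as resp:
                            entry = (time.time() + TTL,
                                     resp.headers.get("Content-Type", "application/octet-stream"),
                                     resp.read())
                    except Exception:
                        self.send_error(502, "origin fetch failed")
                        return
                    _cache[self.path] = entry
                self.send_response(200)
                self.send_header("Content-Type", entry[1])
                self.send_header("Content-Length", str(len(entry[2])))
                self.end_headers()
                self.wfile.write(entry[2])

        if __name__ == "__main__":
            ThreadingHTTPServer(("", 8080), CachingHandler).serve_forever()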

    Read the article

  • Unable to log in through Varnish cache

    - by ArunS
    I am setting up an Active Collab site on my new server. The setup is like below:

        Internet --- varnish --- apache

    But I am not able to log in to the site through the Varnish cache, although I can log in to the site through Apache directly. Here is my VCL file:

        backend default {
            .host = "localhost";
            .port = "8080";
        }

        acl purge {
            "localhost";
        }

        sub vcl_recv {
            if (req.request == "PURGE") {
                if (!client.ip ~ purge) {
                    error 405 "Not allowed.";
                }
                return(lookup);
            }
            if (req.url ~ "^/$") {
                unset req.http.cookie;
            }
        }

        sub vcl_hit {
            if (req.request == "PURGE") {
                set obj.ttl = 0s;
                error 200 "Purged.";
            }
        }

        sub vcl_miss {
            if (req.request == "PURGE") {
                error 404 "Not in cache.";
            }
            if (!(req.url ~ "wp-(login|admin)")) {
                unset req.http.cookie;
            }
            if (req.url ~ "^/[^?]+.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.|)$") {
                unset req.http.cookie;
                set req.url = regsub(req.url, "\?.$", "");
            }
            if (req.url ~ "^/$") {
                unset req.http.cookie;
            }
        }

        sub vcl_fetch {
            if (req.url ~ "^/$") {
                unset beresp.http.set-cookie;
            }
            if (!(req.url ~ "wp-(login|admin)")) {
                unset beresp.http.set-cookie;
            }
        }

    When I try to log in through Varnish, I am redirected back to the login page. If I enter a wrong password, it asks me to enter the correct password.

    Read the article

  • Looking for alternative to NetNewsWire and Google Reader

    - by Janine Sisk
    I have used NetNewsWire on both Mac and iPhone for years, and have been reasonably happy. Unfortunately the latest version of NNW for Mac, which I just upgraded to, now matches Google Reader's policy of marking items read after 30 days. The older version I had been using did not do this. I regularly keep unread items for way longer than 30 days, and do eventually read them, so this is a major bummer for me. The iPhone version has behaved this way forever, but I was ok with that as long as I had the full list on my desktop. Keeping in sync between the iPhone and Mac is very important, so I need to find another online solution with clients available on both sides. I've done a little digging but haven't found much; it seems like Google has pretty much taken over this space. Any suggestions?

    Read the article

  • Unable to access certain websites from a computer

    - by matt74tm
    One of the desktop computers in my office is unable to access some particular websites. We've tried Chrome, IE and Firefox, but no luck. For example: go to http://spsims.wto.org/ and click on "Regular notifications". On the affected computer, every browser times out after the click, whereas it should redirect the user to http://spsims.wto.org/web/pages/search/notification/regular/Search.aspx. How can I diagnose this further? This is a Windows XP machine.

    Read the article

  • Obtaining a list of files from a PHP script?

    - by SenorSputnik
    Naenara/KCCKP offers a catalog of hundreds of MP3 files that can only be downloaded in small amounts at a time. Clicking on a song title invokes mp3player.php and downloads a plaintext link to the MP3 file: http://www.kcckp.net/mp3player.php?e+8 Going directly to mp3player.php displays Korean error messages and sends you back one page in your history. Is there any way to parse/coax mp3player.php into dumping a full list of MP3 links? I am sorry if this is a painfully easy or impossible task; I have not even begun to delve into PHP. </newbiewhining>
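
    Since each call to mp3player.php apparently returns a plaintext link, one blunt approach is to enumerate the index in the query string from the client side instead of trying to change the PHP. A hedged sketch in Python: the e+N parameter pattern and the 1-200 range are guesses based on the single example URL, so adjust both to whatever the site actually uses.

        # grab_links.py -- sketch: call mp3player.php once per index and collect the
        # plaintext links it returns. The "e" prefix and the 1-200 range are guesses.
        import time
        import urllib.request

        BASE = "http://www.kcckp.net/mp3player.php?e+{}"

        links = []
        for i in range(1, 201):
            try:
                with urllib.request.urlopen(BASE.format(i), timeout=20) as resp:
                    text = resp.read().decode("utf-8", errors="replace").strip()
            except Exception as exc:
                print("index", i, "failed:", exc)
                continue
            if text.lower().endswith(".mp3"):
                links.append(text)
            time.sleep(1)  # be polite to the server

        with open("mp3_links.txt", "w") as out:
            out.write("\n".join(links))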

    Read the article

  • Lightning fast forum based around metadata / tags? [closed]

    - by Dan W
    I wonder if anything like this exists. I'd like to add a forum to my site, but instead of the usual forum/subforum/sub-subforum structure, I'd like to use a metadata/tag approach where everything exists as a single directory, and where there's a search field at the top which instantly (<0.5 sec) filters the threads to a particular keyword or keywords. Also, as the admin, I would be able to add highly visible buttons at the top, which can be clicked on for the main categories I choose for the forum (nevertheless, users can also add tags to their own threads outside of these default main tags I supply if they wish). This approach, if done properly, is more powerful, efficient, maintenance free, scalable and friendly than a standard forum, so I was hoping someone had the same idea and made something out of it. It couldn't be that hard. I'd want the speed to be up to (or near) the standard of this: http://forum.dlang.org/ Other forums (e.g.: phpBB, shudder) are orders of magnitude worse than that in terms of latency (posting or browsing), and I think that is wrong, even in principle ;)
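
    As a toy illustration (with made-up thread data) of why the tag-filtering part really isn't hard: an inverted index from tag to thread ids turns a multi-tag query into a couple of set intersections, which is effectively instant at forum scale; the hard part is everything else a forum needs (accounts, posting, moderation, spam control).

        # tagindex.py -- toy inverted index: each tag maps to a set of thread ids,
        # and a multi-tag search is just a set intersection. Data is made up.
        from collections import defaultdict

        class TagIndex:
            def __init__(self):
                self.by_tag = defaultdict(set)   # tag -> {thread_id, ...}
                self.threads = {}                # thread_id -> title

            def add_thread(self, thread_id, title, tags):
                self.threads[thread_id] = title
                for tag in tags:
                    self.by_tag[tag.lower()].add(thread_id)

            def search(self, *tags):
                sets = [self.by_tag.get(t.lower(), set()) for t in tags]
                ids = set.intersection(*sets) if sets else set()
                return [self.threads[i] for i in ids]

        index = TagIndex()
        index.add_thread(1, "GPU picking in OpenGL", ["opengl", "graphics"])
        index.add_thread(2, "Shader compile errors", ["opengl", "bugs"])
        print(index.search("opengl"))          # both threads
        print(index.search("opengl", "bugs"))  # only the second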

    Read the article
