Search Results

Search found 11114 results on 445 pages for 'dynamic websites'.

Page 284/445 | < Previous Page | 280 281 282 283 284 285 286 287 288 289 290 291  | Next Page >

  • Firefox generating error?

    - by Lynda
    I run Firebug on my computer since I develop websites, and I have been noticing this error consistently on every page I visit. I am lost as to what it is and believe Firefox itself might be causing it. Has anyone seen it before? Here is the error:

        An exception occurred. Traceback (most recent call last):
          File "resource://jid1-g0j5yenav9jwla-at-jetpack-api-utils-lib/tabs/tab.js", line 254, in null
            .getInterface(Ci.nsIWebNavigation)
        Error: Permission denied for <http://superuser.com> to create wrapper for object of class UnnamedClass

    Read the article

  • OpenVPN IPv6 over IPv4 tunnel

    - by user66779
    Today I installed OpenVPN 2.3rc2 on both my Windows 7 client machine and my CentOS 6 server. This new version of OpenVPN provides full compatibility for IPv6. The problem: I am currently able to connect to the server (through the IPv4 tunnel), ping the IPv6 address which is assigned to my client, and also ping the tun0 interface on the server. However, I cannot browse to any IPv6 websites. My VPS provider has given me this: 2607:f840:0044:0022:0000:0000:0000:0000/64 is routed to this server (2607:f840:0:3f:0:0:0:eda). This is ifconfig after setup, with OpenVPN running:

        eth0      Link encap:Ethernet  HWaddr 00:16:3E:12:77:54
                  inet addr:208.111.39.160  Bcast:208.111.39.255  Mask:255.255.255.0
                  inet6 addr: 2607:f740:0:3f::eda/64 Scope:Global
                  inet6 addr: fe80::216:3eff:fe12:7754/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:2317253 errors:0 dropped:7263 overruns:0 frame:0
                  TX packets:1977414 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1696120096 (1.5 GiB)  TX bytes:1735352992 (1.6 GiB)
                  Interrupt:29

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

        tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:10.8.0.1  P-t-P:10.8.0.2  Mask:255.255.255.255
                  inet6 addr: 2607:f740:44:22::1/64 Scope:Global
                  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
                  RX packets:739567 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1218240 errors:0 dropped:1542 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:46512557 (44.3 MiB)  TX bytes:1559930874 (1.4 GiB)

    So OpenVPN is successfully creating a tun0 interface and assigning clients IPv6 addresses from 2607:f840:44:22::/64. The first client to connect gets 2607:f840:44:22::1000, the second 2607:f840:44:22::1001, and so on, plus 1 each time. After connecting as the first client, I can ping 2607:f740:44:22::1 and 2607:f740:44:22::1000 from my Windows client machine. However, I have no access to IPv6 websites. I believe the problem is that the tun0 IPv6 addresses are not being forwarded to the eth0 interface.

    This is the firewall running on the server:

        #!/bin/sh
        #
        # iptables configuration script
        #
        # Flush all current rules from iptables
        #
        iptables -F
        iptables -t nat -F
        #
        # Allow SSH connections on tcp port 22
        #
        iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22 -j ACCEPT
        #
        # Set access for localhost
        #
        iptables -A INPUT -i lo -j ACCEPT
        #
        # Accept connections on 1195 for vpn access from client
        #
        iptables -A INPUT -i eth0 -p udp --dport 1195 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p udp --sport 1195 -m state --state ESTABLISHED -j ACCEPT
        #
        # Apply forwarding for OpenVPN Tunneling
        #
        iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A FORWARD -s 10.8.0.0/24 -j ACCEPT
        iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to 209.111.39.160
        iptables -A FORWARD -j REJECT
        #
        # Enable forwarding
        #
        echo 1 > /proc/sys/net/ipv4/ip_forward
        #
        # Set default policies for INPUT, FORWARD and OUTPUT chains
        #
        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        #
        # IPv6
        #
        IP6TABLES=/sbin/ip6tables
        $IP6TABLES -F INPUT
        $IP6TABLES -F FORWARD
        $IP6TABLES -F OUTPUT
        echo -n "1" >/proc/sys/net/ipv6/conf/all/forwarding
        echo -n "1" >/proc/sys/net/ipv6/conf/all/proxy_ndp
        echo -n "0" >/proc/sys/net/ipv6/conf/all/autoconf
        echo -n "0" >/proc/sys/net/ipv6/conf/all/accept_ra
        $IP6TABLES -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        $IP6TABLES -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
        $IP6TABLES -A INPUT -i eth0 -p icmpv6 -j ACCEPT
        $IP6TABLES -P INPUT ACCEPT
        $IP6TABLES -P FORWARD ACCEPT
        $IP6TABLES -P OUTPUT ACCEPT

    Server.conf:

        server-ipv6 2607:f840:44:22::/64
        server 10.8.0.0 255.255.255.0
        port 1195
        proto udp
        dev tun
        ca ca.crt
        cert server.crt
        key server.key
        dh dh2048.pem
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 208.67.222.222"
        push "dhcp-option DNS 208.67.220.220"
        keepalive 10 60
        tls-auth ta.key 0
        cipher AES-256-CBC
        comp-lzo
        user nobody
        group nobody
        persist-key
        persist-tun
        status openvpn-status.log
        log-append openvpn.log
        verb 5

    Client.conf:

        client
        dev tun
        nobind
        keepalive 10 60
        hand-window 15
        remote 209.111.39.160 1195 udp
        persist-key
        persist-tun
        ca ca.crt
        key client1.key
        cert client1.crt
        remote-cert-tls server
        tls-auth ta.key 1
        comp-lzo
        verb 3
        cipher AES-256-CBC

    I'm not sure where I am going wrong; it could be the firewall, or something missing from server.conf or client.conf. This version of OpenVPN was only released yesterday, and there's little info on the internet about how to set up an IPv6 over IPv4 VPN tunnel. I've read the manual for this new version of OpenVPN (the parts pertaining to IPv6) and it provides very little info too. Thanks for any help.
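
    A rough sketch of the pieces typically still missing in a setup like this: clients need a default IPv6 route through the tunnel, and the server needs to forward between tun0 and eth0. The rules below only mirror the prefix quoted above and are untested assumptions against this exact setup; the proxy-NDP line is only relevant if the provider expects the /64 to be on-link on eth0 rather than routed.

        # server.conf – push a default IPv6 route to clients (route-ipv6 is the 2.3 directive)
        push "route-ipv6 2000::/3"

        # on the server – let traffic flow between the tunnel and eth0
        ip6tables -A FORWARD -i tun0 -o eth0 -s 2607:f840:44:22::/64 -j ACCEPT
        ip6tables -A FORWARD -i eth0 -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT

        # only if the /64 is treated as on-link upstream: answer neighbour discovery per client
        ip -6 neigh add proxy 2607:f840:44:22::1000 dev eth0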

    Read the article

  • Trojan infection help please

    - by brandon
    Hey, I was browsing some websites and somehow obtained a trojan through some sort of silent download. Google Chrome started acting funny and wouldn't load web pages, and neither would Internet Explorer. Only Firefox worked. I rebooted my computer and, as usual, logged into my email account as well as my bank account online, completely forgetting about the infection. Could my information have been sent to the person or people who wrote the trojan? I downloaded ZoneAlarm and took care of the issue; I'm just worried about having absentmindedly logged into my email and bank accounts online while I was infected.

    Read the article

  • How to make a server check its own availability on the web?

    - by Javawag
    Hi all, just a quick question – my server is running at my house, serving web pages at www.javawag.com. The problem is that my home internet connection keeps dropping randomly, for about 10 minutes at a time. This is only an intermittent problem and will go away soon, I hope. However, my server doesn't recover properly: when the connection comes back, I can still access it locally at 192.168.0.8 without any issue, but at www.javawag.com there's no reply! (Just an aside – my home connection has a dynamic IP; the domain www.javawag.com points to javawag.dyndns.org, which in turn points to my IP, updated every minute by ddclient on the server.) Is there some way for the server to check periodically whether it's accessible from the outside world, and if not, restart Apache or reboot? Oh, and if I reboot, the problem fixes itself too! Javawag
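
    A minimal sketch of one way to do this from cron, assuming curl is installed and a Debian-style init script path for Apache (both assumptions; adjust the URL, path and restart command to the actual setup):

        #!/bin/sh
        # check-web.sh – restart Apache if the public URL stops answering
        URL="http://www.javawag.com/"

        # -f treats HTTP errors as failures, --max-time stops it hanging on a dead connection
        if ! curl -sf --max-time 30 -o /dev/null "$URL"; then
            /etc/init.d/apache2 restart
        fi

    Run it from root's crontab every few minutes, e.g. */5 * * * * /usr/local/bin/check-web.sh. Note the check runs from the server itself, so depending on the router it may not exercise the full outside-in path.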

    Read the article

  • Psexec issue when running an application on a Windows Server 2008 R2 machine from a 2003 R2 machine

    - by Vermin
    I am trying to run an application on a Windows Server 2008 R2 machine from a Windows Server 2003 R2 machine using a batch file with the following line:

        psexec \\nightmachine -u DOMAIN\User -p Password -i "C:\FilePath\Application.exe" argument1 argument2

    The application fails to run correctly when started using psexec, but it runs correctly if I log into nightmachine with the same user and start it from its file path via cmd. I have been able to get hold of the error from the application's log, and the exception returned is:

        System.DllNotFoundException: Unable to load DLL 'rasapi32.dll': A dynamic link library (DLL) initialization routine failed. (Exception from HRESULT: 0x8007045A)

    After searching for that error code on the net, there are a lot of posts saying this is caused by file corruption, but I can't see why that would be the case, as the application runs normally when not started from psexec. (The user is an administrator on both machines.) Can anyone please help me with this? If any more information is needed to help solve this issue, please ask and I will do my best to post it.

    Read the article

  • Typical Service Response Time for software vendors [closed]

    - by Miky D
    I'm trying to find out what standard service/tech-support response times are expected of a software vendor. We're being asked by a customer to enter into an agreement regarding technical support for a software application that we're selling. Basically, I'm interested in the typical turnaround time (i.e. time to respond, time to resolution) based on the severity of the issue. I'm also interested in the financial structure of such agreements, e.g. charge per incident, bundles with unlimited incidents per customer, etc. Any information or suggestions of where to find such information (even examples from other software vendors' websites) would be greatly appreciated!

    Read the article

  • Cheapest Highly Available Web Server [closed]

    - by xyz
    I would like to create a highly available setup (e.g. a small cluster) for a web server, i.e. it will run Apache, PHP and MySQL. There will be 2-8 small websites running with only very little traffic and workload. High availability is, however, very important. I don't want to be dependent on one datacenter, so there must be a minimum of 2 servers placed in different datacenters, and if one server goes down, the user must experience no downtime (or only a minimum of downtime) and no data loss. I have considered Amazon AWS using their Elastic Load Balancing, since it is possible to buy 2 EC2 instances in 2 availability zones and set up load balancing and RDS (Multi-AZ). However, this seems rather expensive. Using the AWS price calculator http://calculator.s3.amazonaws.com/calc5.html it totals to $185/month for the first year (including the free tier). Are my calculations incorrect, or is there a cheaper way to make this HA setup? Best regards

    Read the article

  • Drive stopped working on Windows Server 2003 and I receive a "controller error"

    - by Durden81
    I can access the server in safe mode. I have an HP ProLiant 360 server with Windows Server 2003 R2. The event viewer is completely filled up with this error:

        the driver detected a controller error on Device\Harddisk3\DR3

    I identified the affected drive: it is drive H, a secondary, non-mirrored drive. When I access anything on that drive I receive: "the request could not be performed because of an I/O device error". What should I do? Is this just a driver issue or a hard drive failure? Please help quickly, as my websites are offline due to this. Any suggestion is welcome!

    Read the article

  • Is there a log showing why a Windows server did not restart SQL Server after a reboot?

    - by MerlinMags
    Our server was rebooted after a Windows Update scheduled for 1am, but after the restart SQL Server did not start up, so our websites could not be displayed. Usually this process happens with no manual intervention. Is there a log somewhere which might indicate why the Windows startup process did not start SQL Server again? I've looked in the Event Viewer (Application log) and SQL's own file E:\MSSQL\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG*, but these only contain records of successful startup operations... nothing mentions a failed attempt to start a service or anything like that.
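
    One place worth checking that is easy to miss: service start failures are normally recorded in the System event log (source "Service Control Manager"), not the Application log. A hedged sketch of pulling the newest System entries from a command prompt and filtering for SQL-related lines:

        rem dump the 50 newest System log events as text and look for SQL-related lines
        wevtutil qe System /c:50 /rd:true /f:text | findstr /i "SQL"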

    Read the article

  • Need to know who is hogging my bandwidth?

    - by Dev
    I have an ethernet connection to my iMac, and with Internet Sharing I am broadcasting a wireless network from my Mac rather than using a wireless router. I use it to connect other devices wirelessly to the internet, but this makes all the traffic flow through my iMac. I wanted a way to analyze the traffic so that I know which connected devices are hogging the bandwidth at a given time, and from which websites. I installed Wireshark for Mac and played around a little, but it seems like overkill when you first look at it. Can someone please help with a few instructions to get what I need, or any other way other than using Wireshark? Thanks, Dev.
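
    A lighter-weight alternative to the full Wireshark UI is tcpdump in a terminal. A rough sketch that captures a burst of packets on the shared interface and ranks senders by packet count (the interface name is an assumption; it may be en1 or bridge0 depending on how Internet Sharing is configured, and packet counts are only a proxy for bytes):

        # capture 5000 packets and rank hosts by packet count
        sudo tcpdump -i en1 -c 5000 -n -t -q | awk '{print $2}' | cut -d . -f 1-4 | sort | uniq -c | sort -rn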

    Read the article

  • What are possible causes of keyboard lag on my desktop machine?

    - by Jer
    I am running Windows 7 and began experiencing keyboard lag in most applications, and it seems to be getting worse. Certain websites are the worst – on some, I can type a sentence, take my hands off the keyboard, and watch the characters continue to appear on the screen for several seconds. Others are not as bad, but it is still noticeable and annoying. I just started noticing it in non-browser applications (e.g. Outlook) as well. I've disabled all extensions in Firefox and rebooted my machine, and that did nothing. There is nothing using much memory or CPU cycles, even when the lag is occurring. This is a machine at work with very strict controls over what can be installed, so the chances of any kind of malware are very slim. I don't believe anything has been installed since before the problem started. What could be causing this, and/or what can I do to debug it?

    Read the article

  • Web spidering/crawling: can I do it, or just search engines?

    - by bboyreason
    I already had a question answered about web scraping with wget, but as I read a little more, I realize I may be looking for a web-crawling program – particularly the part about web crawlers being able to get specific data like links or, in my case, products. All of the products on my site have the following naming convention: website.com/uniqueAlphaNumericID.html. As far as I know, no dynamic content generation is being used, and there is only one page per item in the above format. Should I just be thinking about something like:

        wget website.com | grep *.html

    or should I be looking into spiders/crawlers?
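
    wget can already do the crawling by itself; piping it to grep isn't needed. A hedged sketch that mirrors just the .html product pages and then lists them (flags: -r recursive, -l depth limit, -np stay below the start URL, -A accept pattern):

        # fetch the .html pages from the site, staying on the same host
        wget -r -l 2 -np -A "*.html" http://website.com/

        # the product URLs are then simply the downloaded file names
        find website.com -name "*.html"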

    Read the article

  • Two routers, one off-site, same ISP-assigned static IP. A recipe for conflict?

    - by boost
    This is the situation I've inherited: there are two routers, one off-site. Both are connected to the ISP. The ISP assigns both of them the same static IP (or so it seems). Presumably, the network problems we're having are related to the fact that you can't have two instances of the same IP. So we rang up the folks off-site and told them to turn off their router. Now everything's working okay here. How do I get around this? Get another static IP? Figure out how to get the router to ask for a dynamic IP (as we're not using the static IP for anything)?

    Read the article

  • Free website hosting with Google App Engine

    - by mickthompson
    I'm reading about Google App Engine. I'm creating a bunch of simple dynamic websites in Java, and I'm considering using Google App Engine to host my clients' websites. This way I only have to register a domain like www.myclietdomain.com and then point that to the Google App Engine application... In this way I plan to avoid hosting costs; in fact, I'm currently paying even for hosting a few static HTML pages... Do you think it is possible to use Google App Engine for this purpose?
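
    It is possible in principle; a plain Java web app deploys to App Engine with one extra descriptor. A minimal hedged sketch of that descriptor (the application id below is a placeholder for whatever gets registered in the App Engine console, and custom domains are mapped to the app separately):

        <!-- war/WEB-INF/appengine-web.xml -->
        <appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
            <application>my-client-site</application>
            <version>1</version>
        </appengine-web-app>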

    Read the article

  • Can ping/nmap server, nothing else

    - by lowgain
    I was SSHed into our Ubuntu LAMP server and was just doing an svn update, which hung. I disconnected, and since then I have not been able to SSH in or view any of our websites (neither from my network nor through a remote machine). I would have just assumed the server went down, but I can ping the machine and get really quick responses. Using nmap on the box shows all the normal ports open, so I am confused. This server is hosted remotely in a datacenter; do I have any remaining options except contacting them for support? Thanks!

    Read the article

  • Processing files from a Content Distribution Network problem

    - by Derek
    From what I understand, CDNs are meant to physically cache your static files in multiple regions closer to your users. However, I've noticed a few websites that, when a page is requested from their server, grab the asset files from their CDN, process them (compress, minify, etc.), cache the results on their server and then send them to the user requesting the page. This doesn't make too much sense to me. Wouldn't processing the files on your server eliminate the gains from using a CDN? Is this a normal way of doing things, or am I not understanding the whole asset management concept?

    Read the article

  • How should I organize my backups ?

    - by Patrick
    I'm using rsync for the first time to create daily backups of my websites, and I was wondering whether I should overwrite the previous copy or create multiple copies and overwrite only the oldest one (I might not have enough space for that, though). I actually also have this question: let's suppose most of the files are accidentally erased... does rsync delete all these files from the backup space because they don't exist anymore? How exactly does it work in this case? Thanks
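
    For what it's worth, rsync only deletes files from the backup if told to (e.g. with --delete), and a common middle ground between one overwritten copy and many full copies is rotating hard-linked snapshots with --link-dest: unchanged files cost almost no extra space, and an accidental deletion on the live site only vanishes from the newest snapshot. A rough sketch, assuming backups live under /backup and the site under /var/www:

        # rotate: drop the oldest snapshot and shift the others down
        rm -rf /backup/day.2
        mv /backup/day.1 /backup/day.2 2>/dev/null
        mv /backup/day.0 /backup/day.1 2>/dev/null

        # new snapshot, hard-linking files that haven't changed since the previous one
        rsync -a --delete --link-dest=/backup/day.1 /var/www/ /backup/day.0/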

    Read the article

  • Word 2007 "Out of Memory or Disk Space" Error on launch

    - by Adam
    Word 2007 is installed on a Vista Home Premium machine, and whenever it starts up it opens what appears to be a dynamic installer to do something and then throws up the "Out of Memory or Disk Space" error. Word 2007 never completes starting up. Reinstalling Word hasn't helped, and if I can avoid reinstalling Windows until Windows 7 is released and get Word working in the meantime, that would be ideal. I've been looking around for a solution, one of which seemed to point to a problem with the user account. I created a second user on the machine and Word still had the same problem. The other solution that seems possible is a corrupted normal.dot/normal.dotm file. However, even in the location it should be, I can't seem to find it. Am I going in the right direction with this? Is there another solution I haven't come across that will fix this? If renaming normal.dot/normal.dotm is the fix, how can I find it?
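
    For locating the template: on Vista it normally lives in the (hidden) roaming profile, under %APPDATA%\Microsoft\Templates. A hedged sketch of renaming it out of the way from a command prompt so Word rebuilds a fresh copy on its next start:

        rem rename the global template so Word recreates it
        ren "%APPDATA%\Microsoft\Templates\Normal.dotm" Normal.old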

    Read the article

  • Analytics on Mobile Phones

    - by Samuh
    Tracking events and setting up analytics for websites seems easy. You create an account with one of the analytics service providers like Google. They give you JavaScript code that you embed in your pages (for whichever events you wish to track) and voila, you're done. I have written a native application for Android phones, which is actually an adaptation of the actual web site. Now I am required to set up analytics and tracking for this native application. Question: how do I do this on mobile phones from within a native application? We have JavaScript code that works for the original web site. Is there a way to incorporate that in the native application? I know Android supports JavaScript via WebViews (WebKit); my application does not have WebViews and it is native. Also, I have not worked on JavaScript since school, so excuse me if I sound naive. Thanks.

    Read the article

  • Web Site Monitoring/Tracking Freeware

    - by jsmith
    I need to be able to track websites visited on a computer and send them to an email address on a daily basis. Keylogger software seems like too much; I want something lightweight that simply monitors websites visited and forwards them on. I was hoping for freeware, but if it's cheap, simple and easy to use I'm willing to pay. I know similar questions have been asked about website traffic monitoring, but it's not quite the same thing, and I can't seem to find an answer to this question anywhere. Thank you ahead of time.

    Read the article

  • Reverse Proxy (mod_rewrite) and Rails (absolute paths)

    - by SooDesuNe
    I have a front-end Rails app that reverse proxies to any of a number of backend Rails apps depending on the URL. For example, http://www.my_host.com/app_one reverse proxies to http://www.remote_host_running_app_one.com, such that a URL like http://www.my_host.com/app_one/users will display the contents of http://www.remote_host_running_app_one.com/users. I have a large, and ever expanding, number of backends, so they cannot be explicitly listed anywhere other than in a database. This is no problem for mod_rewrite using a prg:/ rewrite map reverse proxy. The question is: the URLs returned by Rails helpers have the form /controller/action, making them absolute to the root. This is a problem for the page served by mod_rewrite, because links on the proxied page appear as absolute to the domain. That is, http://www.my_host.com/app_one/controller/action has links that end up looking like /controller/action when they need to look like /app_one/controller/action. mod_proxy_html seems like the right idea, but it doesn't seem to be as dynamic as I would need, since the rules need to be hard-coded into the config files. Is there a way to fix this server-side, so that the links will be routed correctly?
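
    For a single, known backend the usual mod_proxy_html shape looks roughly like the sketch below (the hostname and mount point are taken from the example above); the catch, as noted, is that the mapping is static, so with a database-driven list of backends it would have to be generated per backend, or the link prefix handled inside the Rails apps instead (e.g. a relative URL root):

        # static sketch for one backend mounted at /app_one
        <Location /app_one/>
            ProxyPass http://www.remote_host_running_app_one.com/
            ProxyPassReverse http://www.remote_host_running_app_one.com/
            ProxyHTMLEnable On
            ProxyHTMLURLMap / /app_one/
        </Location>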

    Read the article

  • I can only see the first website alphabetically

    - by Victor
    I have 3 subdomain websites: subdomain1.mydomain.com, subdomain2.mydomain.com and subdomain3.mydomain.com. I have pointed these to the external IP address. bind is OK, dig is onerror, Apache2 reloads OK.

    1.) If I set the following, I can only see the first one alphabetically:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName subdomain1.mydomain.com
            ...
        <VirtualHost *:80>
            ServerName subdomain2.mydomain.com
            ...

    2.) If I set the following, I get "file not found". Apache2 reloads OK:

        NameVirtualHost mydomain.com:80
        <VirtualHost mydomain.com:80>
            ServerName subdomain1.mydomain.com
            ...
        <VirtualHost mydomain.com:80>
            ServerName subdomain2.mydomain.com

    Please help! What else should I do?
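
    For comparison, a hedged minimal pair of name-based vhosts that works once DNS for every name resolves to the server (the DocumentRoot paths are placeholders). If dig fails for subdomain2/3, Apache never receives a matching Host header and falls back to the first vhost loaded, which on a stock setup is the alphabetically first config file – exactly the symptom described:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName subdomain1.mydomain.com
            DocumentRoot /var/www/subdomain1
        </VirtualHost>

        <VirtualHost *:80>
            ServerName subdomain2.mydomain.com
            DocumentRoot /var/www/subdomain2
        </VirtualHost>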

    Read the article

  • Using DNS entries to determine location

    - by Raphink
    I'm trying to think of a clean way to determine the location of machines (mainly, which datacenter they belong to) based on their network settings. I would like it to be dynamic, and I'm thinking of using special DNS records that would be specific to the DNS server in each datacenter. For example, you could have:

        root@machine1# dig TXT mysite
        ...
        mysite 3600 IN TXT "DC1"
        ...

        root@machine2# dig TXT mysite
        ...
        mysite 3600 IN TXT "DC2"
        ...

    etc. I know that DNS has a special LOC record for location, but it takes coordinates, so it doesn't help in my case. Is there a standard way of addressing this issue, another special type of record for it, or some standard entries in TXT records?
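
    As far as standard record types go, LOC (coordinates) is about it, so a per-datacenter TXT record served by each datacenter's own resolver, as sketched above, is a reasonable convention. A small sketch of the zone entry and the lookup a machine would run (the record name and value are just the ones used above):

        ; in the zone served by DC1's resolver
        mysite    3600    IN    TXT    "DC1"

    and on a machine:

        # strip the surrounding quotes from the answer
        dig +short TXT mysite | tr -d '"'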

    Read the article

  • How to reference individual cells in Excel to variable data from records in an external SQL table

    - by user273476
    I have a SQL table containing date-oriented financial data, e.g. multiple daily records with fields for Date, Account code and Value. I want to set up dynamic links (formulas) from cells in an Excel spreadsheet to this data, so that when the spreadsheet is loaded the data is fetched from all the relevant records. The spreadsheet has the Account codes on the x axis and Dates on the y axis. Each day the SQL table gets new data for the new day, and I want the spreadsheet to reference this new data in the column for the new day. Any ideas? I have seen how you can generally bring in data from a SQL table (in our case using ODBC, as it is not MS SQL), but this is not simply bringing in multiple records as you would from a CSV file; rather, specific records in the SQL table need to be referenced by specific cells and columns in the spreadsheet.

    Read the article

  • Best way to use mod_rewrite to replace WordPress pages with static files

    - by David Moles
    Here's the situation: I've got an old WordPress installation that I'd like to archive as static files, but I'd also like to preserve old URLs. I've already created the static archive with wget and sorted out the filenames and links. Now I'd like to configure Apache to intercept requests for the old dynamic URLs and replace them with the new static ones, e.g.:

        http://www.example.org/log/?p=1234
        http://www.example.org/log/index.php?p=1234

    should redirect to

        http://www.example.org/log/archives/1234.html

    I've tried adding the following to the VirtualHost config for example.org, but to no effect – I just get the PHP page:

        RewriteCond %{REQUEST_URI} /log/
        RewriteCond %{QUERY_STRING} p=([^&;]*)
        RewriteRule ^/$ http://%{SERVER_NAME}/log/archives/%1.html [R,L]

    I've enabled logging and I can see what look like other rules being applied, but not this one. None of my other guesses at match patterns for %{REQUEST_URI} seem to have any effect either (log, log/, log.*, even .*). I'm new to mod_rewrite and this is mostly cargo cult, so I'm pretty sure I've gotten it wrong. Anyone know what I should be doing here?
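
    Two details commonly bite here: in a VirtualHost context the RewriteRule pattern is matched against the full URL-path (/log/ or /log/index.php), so ^/$ never matches, and the query string has to be discarded explicitly or the redirect keeps ?p=1234. A hedged variant of the rule reflecting both points (the trailing ? on the substitution drops the query string):

        RewriteEngine On
        RewriteCond %{QUERY_STRING} (?:^|&)p=([^&;]+)
        RewriteRule ^/log/(?:index\.php)?$ http://%{SERVER_NAME}/log/archives/%1.html? [R=301,L]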

    Read the article
