Search Results

Search found 22993 results on 920 pages for 'global load balancing'.

Page 575 of 920

  • Recover an accidentally closed Opera window after restarting Opera?

    - by Kostas
    Hello there and thanks for all the help! I accidentally closed a window with multiple tabs in Opera. There was another window with a couple of tabs running. I have closed and restarted Opera in the hope that it would retrieve the windows at startup (it usually asks if I want to continue from last time). Is there a way I can retrieve my closed window with all its tabs? I cannot see them in the history either, probably because when I switched on my PC the Internet connection was down and the pages didn't load :-/

    Read the article

  • My NTFS Partition keeps becoming "unusable" on Ubuntu, Any Ideas?

    - by gopherman
    I just purchased a new 2 TB external Seagate drive. My main system uses both Windows and Ubuntu, so I am pretty much stuck with keeping the drive as NTFS. I have done this without any problems before, but since I got this new drive I have been having issues. When I first load up Ubuntu the drive mounts and runs fine; after an unspecified amount of time I start getting input/output errors when accessing the drive. When I go to the Disk Utility I get a message stating the drive is "Unknown or Unused". If I disconnect and reconnect the drive or reboot, everything is fine again. There are no errors coming up with S.M.A.R.T. and it seems to work fine under Windows. Any thoughts?
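
    A drive that works and then starts returning I/O errors until it is re-plugged often points to the USB link resetting (enclosure firmware or power management) rather than the filesystem itself. A hedged diagnostic sketch for the next time it happens (sdX/sdX1 are placeholders for the actual device):

      # check the kernel log at the moment of failure for USB resets / I/O errors
      dmesg | tail -n 30
      # SMART data for the external disk (USB enclosures may need -d sat)
      sudo smartctl -a /dev/sdX
      # basic NTFS consistency fix from ntfs-3g, run against the partition
      sudo ntfsfix /dev/sdX1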

    Read the article

  • Upload large database SQL file

    - by Devy
    I have a database of more than 20 GB on my hard disk. What is the best way to upload it with the least cost possible on the server? - I'm on Windows 7. - I have FTP and SSH access to the server. I avoid using FTP because my connection drops a lot; I can't face re-uploading the whole file after it fails at 99%. I found some tools that split the large .sql file into small .sql files, but they didn't mention how to join those files back into one. Another way is to archive the big .sql file to .rar with the -v option, upload the parts through FTP, then unpack them. But unpacking will also cost resources, right? I know it will cost something in any case, but any best practice will be strongly appreciated.
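
    Since you have SSH, one way around the resume problem is rsync, which picks up a partial transfer where it left off; splitting and rejoining can also be done with plain split/cat. A rough sketch, assuming an rsync/SSH-capable shell on the Windows side (e.g. Cygwin or Git Bash) and a placeholder server name:

      # compress once locally; a plain-text .sql dump usually shrinks a lot
      gzip -c dump.sql > dump.sql.gz
      # rsync resumes after a dropped connection instead of restarting at 0%
      rsync --partial --progress -e ssh dump.sql.gz user@example.com:/tmp/
      # alternative: split into 100 MB chunks, upload them, then rejoin on the server
      split -b 100M dump.sql.gz dump.part.
      # ...upload the parts, then on the server:
      cat dump.part.* > dump.sql.gz && gunzip dump.sql.gz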

    Read the article

  • Can I use a 302 redirect to serve up static content from a URL with escaped_fragment?

    - by Starfs
    We would like to serve up SEO-friendly Ajax-driven content. We are following this documentation. Has anyone ever tried to write a 302 redirect into the .htaccess file that takes the ?_escaped_fragment= string and sends it to a static page, for example /snapshot/yourfilename/? How will Google react to this? I've gone through the documentation and it's not very clear; the quote below is what I found on Google's site. I'm not sure if they are saying that you can redirect the _escaped_fragment_ URL to a different static page, or if this is about redirecting the hashtag URL to static content. Thoughts? From Google's site: Question: Can I use redirects to point the crawler at my static content? Redirects are okay to use, as long as they eventually get you to a page that's equivalent to what the user would see on the #! version of the page. This may be more convenient for some webmasters than serving up the content directly. If you choose this approach, please keep the following in mind: Compared to serving the content directly, using redirects will result in extra traffic because the crawler has to follow redirects to get the content. This will result in a somewhat higher number of fetches/second in crawl activity. Note that if you use a permanent (301) redirect, the url shown in our search results will typically be the target of the redirect, whereas if a temporary (302) redirect is used, we'll typically show the #! url in search results. Depending on how your site is set up, showing #! may produce a better user experience, because the user will be taken straight into the AJAX experience from the Google search results page. Clicking on a static page will take them to the static content, and they may experience avoidable extra page load time if the site later wants to switch them to the AJAX experience.
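
    For the redirect itself, a plain mod_rewrite rule keyed on the _escaped_fragment_ query string is enough; how the captured fragment maps onto your snapshot path depends on your own URL scheme, so the rule below is only a sketch (the /snapshot/ prefix is an assumption):

      RewriteEngine On
      # match the requests Googlebot makes for the AJAX snapshot
      RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.*)$
      # 302 to the static snapshot; %1 is the captured fragment, the trailing ? drops the query string
      RewriteRule ^(.*)$ /snapshot/%1? [R=302,L]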

    Read the article

  • Switch OFF SYN cookies

    - by Nick
    We have several servers with public IPs that work together (one runs a load balancer, another an Apache web server, another MySQL, and so on). Most of the ports are firewalled, so only "local" servers can connect there. However, ALL servers have some ports that must be publicly open. We have SYN cookies enabled and from time to time we get: possible SYN flooding on port 8080. Sending cookies. Port 8080 is not public. How can we switch OFF SYN cookies for some ports (e.g. 8080, 3306, etc.) or from some sources (e.g. our servers), but at the same time keep SYN cookies switched ON for all other ports, e.g. port 80? We found this similar problem, except our servers have public IPs: SYN cookies on internal machines
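
    Note that net.ipv4.tcp_syncookies is a single global switch in the Linux kernel, so it cannot be turned off per port. The usual workaround is to leave it on, enlarge the SYN backlog so cookies only kick in under a genuine flood, and explicitly accept the cooperating servers before any flood-protection rules. A hedged sketch (203.0.113.0/24 is a placeholder for your servers' addresses):

      # keep syncookies, but raise the backlog so normal bursts never trigger them
      sysctl -w net.ipv4.tcp_syncookies=1
      sysctl -w net.ipv4.tcp_max_syn_backlog=4096
      sysctl -w net.core.somaxconn=1024
      # let the cooperating servers reach 8080/3306 before any rate-limiting rules
      iptables -I INPUT -p tcp -s 203.0.113.0/24 -m multiport --dports 8080,3306 -j ACCEPT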

    Read the article

  • Updating WordPress in a multi-node environment

    - by Peter
    I'm finding this very tricky in a multi-node environment with code under revision control, i.e. multiple frontends and a single database. I have a deployment process that pushes a git repo to the servers, but obviously if I update WordPress from within the admin panel, it only updates the files on one FE. Then I would need to copy the new files over to the other FE nodes. Plus, whenever WordPress updates on a node, it writes code into the git repo. That then breaks the auto-deploys that perform 'git pulls', since the repo has untracked changes and refuses to pull in new deploys unless someone intervenes manually. How does one easily keep WordPress updated in a multi-node (load-balanced) environment?
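
    One common pattern is to stop the nodes from updating themselves at all (WordPress's DISALLOW_FILE_MODS constant in wp-config.php blocks updates from the admin panel) and to push updates through the same deploy pipeline as everything else. A rough sketch, assuming wp-cli is available on a single build/staging checkout of the repo (branch and message are illustrative):

      # on the build checkout, not on the FE nodes
      wp core update
      wp plugin update --all
      git add -A && git commit -m "WordPress core/plugin update"
      git push origin master    # the existing auto-deploy then pulls this to every FE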

    Read the article

  • Can't get Unity 3D to work in 11.10

    - by pmoseph
    I recently upgraded to 11.10 on my Lenovo ThinkPad T520, and I'm not able to load Unity 3D (I'm not selecting 2D at login menu either). me@mycomp:~$ echo $DESKTOP_SESSION ubuntu-2d I ran the unity support test below as well. me@mycomp:~$ /usr/lib/nux/unity_support_test -p Xlib: extension "GLX" missing on display ":0.0". Xlib: extension "GLX" missing on display ":0.0". Xlib: extension "GLX" missing on display ":0.0". Error: unable to create the OpenGL context And it looks like I only have one graphics card: me@mycomp:~$ lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) Also, Ubuntu lists nothing under the "Additional Drivers" window. Any help would be extremely appreciated as I'm somewhat of a noob. Thanks! Edit 1: Here is the output of lshw -C display me@mycomp:~$ sudo lshw -C display *-display description: VGA compatible controller product: 2nd Generation Core Processor Family Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 09 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:43 memory:f0000000-f03fffff memory:e0000000-efffffff ioport:5000(size=64)
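
    "GLX missing on display" with working Intel hardware usually means the Mesa GL stack is broken or shadowed by a leftover proprietary libGL rather than a missing driver, so one hedged first step (a guess worth trying, not a guaranteed fix) is reinstalling it:

      # reinstall the Mesa/GLX stack in case a stale libGL is shadowing it
      sudo apt-get install --reinstall libgl1-mesa-glx libgl1-mesa-dri xserver-xorg-core
      # log out and back in (or restart lightdm), then re-run the support test
      /usr/lib/nux/unity_support_test -p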

    Read the article

  • Xnest from Mac OS X to HP-UX

    - by Burbas
    Hi, I'm trying to connect to my HP-UX machine from Mac OS X using Xnest. The problem is that I cannot get the keyboard to work. I can see the login prompt, but I am unable to type my username. The connection is made with: Xnest :1 -query 192.168.0.193 -geometry 1280x1024 and it gives me some errors: (EE) AIGLX error: dlopen of /usr/X11/lib/dri/swrast_dri.so failed (dlopen(/usr/X11/lib/dri/swrast_dri.so, 5): image not found) (EE) GLX: could not load software renderer (EE) XKB: Couldn't open rules file /usr/X11/share/X11/xkb/rules/base (EE) XKB: No components provided for device Virtual core keyboard Couldn't get keyboard. I hope there is someone here who might have the answer :-).
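
    The XKB errors suggest the nested server cannot load its keyboard rules at all. One long-shot workaround, worth a try but not a guaranteed fix, is starting Xnest with the XKEYBOARD extension disabled so it falls back to the core keyboard protocol:

      # -kb disables the XKEYBOARD extension in the nested X server
      Xnest :1 -kb -query 192.168.0.193 -geometry 1280x1024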

    Read the article

  • Wireless only connects to Google

    - by dmanxiii
    I am using a Netgear wireless router to create a wireless network. When connected to the router via an Ethernet cable everything works correctly. When connecting to the router via wireless, only Google can be accessed. I've done searches via Google using random words to make sure that it is not a cached page, and these all work. However, when trying to connect to any non-Google domain, the page fails to load. I've tested the connection with two different laptops, both running Windows Vista. The signal is strong on both, but the issue persists for both. Edit: I have configured the wireless network's name and password, and connect to it using the credentials I have defined. I am sure I am not connecting to a neighbor's wireless network. Edit 2: Typing in the direct IP address for CNN.com ( http://157.166.255.19 ) does not work either.

    Read the article

  • Is Your Company Social on the Inside?

    - by Mike Stiles
    As we talk about the extension of social from an outbound-facing marketing tool to a platform that will reach across the entire enterprise, servicing multiple functions of that enterprise, it might be time to take a look at how social can be effectively employed for internal communications. Remember the printed company newsletter? Yeah, nobody reads it. Remember the emailed company newsletter? Yeah, nobody reads it. Why not? Shouldn't your employees care about the company more than anything else in life and be voraciously hungry for any information related to it? The more realistic prospect is that a company's employees don't behave much differently at work where information is concerned than they do in their personal lives. They "tune in" to information that's immediately relevant to them, that piques their interest, and/or that's presented in a visually engaging way. That currently makes an internal social platform the ideal way to communicate within the organization. It not only facilitates more immediate, more targeted (and thus more relevant) messaging from the company out to employees, it sets a stage for employees to communicate with each other and efficiently get answers to questions from peers. It's a collaboration tool on steroids. If you build such an internal social portal and you do it right, will employees use it? Considering social media has officially been declared more addictive than cigarettes, booze and sex…probably. But what does it mean to do an internal social platform "right"? The bar has been set pretty high. Your employees are used to Twitter and Facebook, and would roll their eyes at anything less simple or harder to navigate than those. All the Facebook best practices would apply to your internal social as well, including the importance of managing posting frequency, using photos and video, moderation & response, etc. And don't worry, you won't be the first to jump in. WPP's global digital agency Possible has its own social network called Colab. Nestle has "The Nest." Red Robin's got one. I myself got an in-depth look at McGraw-Hill's internal social platform at Blogwell NYC. Some of these companies are building their own platforms, others are buying them off the shelf or customizing ready-made solutions. But you won't be the last either. Prescient Digital Media and the IABC found that 39% of companies don't offer employees any social tools. Not a social network, not discussion forums, not even IM. And a great many continue to ban the use of Facebook and Twitter on the premises. That's pretty astonishing since social has become as essential a modern-day communications tool as the telephone. But such holdouts will pay a big price for being mired in fear while competitors exploit social connections unchallenged. Fish where the fish are. If social has become the way people communicate and take in information, let that be the way communication is trafficked in the organization.

    Read the article

  • Statsd, Graphite and graphs

    - by w00t
    I've set up Graphite and statsd and both are running well. I'm using the example-client.py from graphite/examples to measure load values and it's OK. I started doing tests with statsd and at first it seemed OK because it generated some graphs, but now it doesn't look quite right. First, this is my storage-schema.conf: pattern = .* retentions = 10:2160,60:10080,600:262974 I'm using this command to send data to statsd: echo 'ssh.invalid_users:1|c'| nc -w 1 -u localhost 8126 It executes, I click Update Graph in the Graphite web interface, it generates a line, I hit Update again and the line disappears. If I execute the previous command 5 times, the graph line will reach 2 and it will actually save it. Again, running the same command two times, the graph line reaches 2 and disappears. I can't find what I have misconfigured. The intended use is this: tail -n 0 -f /var/log/auth.log|grep --line-buffered "Invalid user" | while read line; do echo "ssh.invalid_users:1|c" | nc -w 1 -u localhost 8126; done
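
    Data points that appear and then vanish are often a whisper aggregation issue rather than a statsd one: when the 10-second points roll up into the 60-second archive, the default xFilesFactor of 0.5 discards intervals that are mostly empty, which is exactly what a sparse counter produces. A hedged sketch of a storage-aggregation.conf in line with the statsd recommendations (default Graphite paths assumed; restart carbon-cache and recreate the affected .wsp files after changing it, since aggregation settings are baked in at creation):

      [min]
      pattern = \.min$
      xFilesFactor = 0.1
      aggregationMethod = min

      [max]
      pattern = \.max$
      xFilesFactor = 0.1
      aggregationMethod = max

      [count]
      pattern = \.count$
      xFilesFactor = 0
      aggregationMethod = sum

      [default_average]
      pattern = .*
      xFilesFactor = 0.3
      aggregationMethod = average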

    Read the article

  • Need for TCP fine-tuning on a heavily used proxy server

    - by Vijay Gharge
    Hi all, I am using Squid as an Internet proxy server on RHEL 4 update 6 and 8 under quite a heavy load, i.e. 8k established connections during peak hours. Without depending much on the application provider's expertise, I want to get the maximum out of Linux. With respect to that I have the following questions: How do I find out if there is scope for further TCP fine-tuning (without exhausting available resources), given that the benchmark values provided by the vendor look poor? Is there any parameter value available from the OS / network stack that will show me this? If there is scope, how should I identify and configure the OS TCP stack parameters, i.e. using sysctl or any specific parameter? After tuning, how can I clearly measure the performance improvement or degradation?
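
    As a starting point only (every value below needs validating against your own peak-hour traffic), the knobs that usually matter for a proxy holding thousands of connections are the local port range, FIN timeout and the listen/SYN backlogs, and comparing netstat -s and the socket counters before and after a change gives a crude but concrete measurement:

      # baseline first
      netstat -s > /tmp/netstat-before.txt
      cat /proc/net/sockstat                       # sockets in use / orphaned / time-wait
      # candidate tuning for a busy proxy (persist in /etc/sysctl.conf once validated)
      sysctl -w net.ipv4.ip_local_port_range="1024 65000"
      sysctl -w net.ipv4.tcp_fin_timeout=30
      sysctl -w net.ipv4.tcp_max_syn_backlog=4096
      sysctl -w net.core.somaxconn=1024
      sysctl -w net.core.netdev_max_backlog=2500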

    Read the article

  • How do I configure namecheap for "arbitrarily-nested" wildcard subdomains?

    - by rabidsnail
    I'm trying to set up something like nyud.net, where any arbitrary chain of subdomains resolves to the same CNAME record (which in my case points to an Amazon Elastic Load Balancer). Ex: www.google.com.nyud.net:8080 points to one of their cache servers, which looks at the HOST header and returns www.google.com. I'm using Namecheap as my DNS host. Adding a CNAME record for *.mydomain.com doesn't seem to do anything (nslookup gives NXDOMAIN for all subdomains). What do I have to do to set this up? Do I have to use something fancier than Namecheap (like Route 53)?
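
    In standard DNS a wildcard already matches any depth of labels (a.b.c.mydomain.com matches *.mydomain.com as long as no more specific record exists), so an NXDOMAIN usually means the record has not propagated or was never saved rather than a wildcard-depth limitation. Querying the zone's authoritative nameservers directly takes caching out of the picture; a quick check, with example.com and the nameserver as placeholders:

      dig +short NS example.com                        # find the authoritative servers
      dig +short CNAME foo.example.com @ns-from-above
      dig +short CNAME a.b.c.example.com @ns-from-above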

    Read the article

  • migrating storage to a different controller

    - by bellocarico
    Hello, I've just purchased a couple of Adaptec controllers (2405/5405) for my ESXi 4.0 U1 servers. Currently ESXi and a couple of VMs are hosted on a single SATA boot disk connected to an onboard Nvidia non-RAID controller. I know that it's possible to migrate from a single disk to RAID 1 with Adaptec and I'm pleased with that, but I'm not sure if ESXi already has the right drivers installed/loaded for this controller. Is there any way I can check this? Is ESXi clever enough to recognize the new hardware and load the right module? Thanks
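
    ESXi 4.x ships an aacraid driver that covers many Adaptec cards, but whether the 2405/5405 is actually picked up is best confirmed on the host itself (and against the VMware HCL) before moving the datastore. A hedged check from tech support mode or SSH:

      # is the Adaptec driver module loaded?
      vmkload_mod -l | grep -i aacraid
      # which storage adapters does ESXi actually see?
      esxcfg-scsidevs -a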

    Read the article

  • Can I make Puppet's module-to-file mapping to start searching at the top of the modules tree?

    - by John Siracusa
    Consider these two Puppet module files: # File modules/a/manifests/b/c.pp class a::b::c { include b::c } # File modules/b/manifests/c.pp class b::c { notify { "In b::c": } } It seems that when Puppet hits the include b::c directive in class a::b::c, it searches for the corresponding *.pp file by looking backwards from the current class and decides that it has found the correct file at ../../b/c.pp. In other words, it resolves b::c to the same *.pp file that the include b::c statement appears in: modules/a/manifests/b/c.pp. I expected it (and would like it) to instead find and load the file modules/b/manifests/c.pp. Is there a way to make Puppet do this? If not, it seems to me that module names may not contain any other module names anywhere within them, which is a pretty surprising restriction.
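
    In Puppet's relative namespacing (Puppet 3.x and earlier), an unqualified name is resolved against the current scope first, which is exactly the lookup described above; anchoring the name at top scope with a leading :: makes it resolve to modules/b/manifests/c.pp. A sketch, assuming a Puppet version that still does relative lookup:

      # File modules/a/manifests/b/c.pp
      class a::b::c {
        include ::b::c   # leading :: forces resolution from top scope
      }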

    Read the article

  • Connection reset - error 101

    - by Maja
    I have a problem with my Internet connection: when I try to load a website, it always shows this error: Error 101 (net::ERR_CONNECTION_RESET): The connection was reset. I'm using Win7 64-bit and I have this router: Asus RT-N10. First, I tried changing the MTU from 1504 to 1472 and it worked for a while, but yesterday it started again. Now I have MTU 1440 but I don't want to lower it further. Is there another solution? By the way, I have no malware on my laptop (I ran an Avast scan) and I've also deleted cookies and disabled the proxy server (I'm not using any).
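
    Rather than guessing MTU values downwards, you can measure the path MTU from the Windows command prompt: find the largest payload that still passes with "don't fragment" set and add the 28 bytes of IP/ICMP headers. A sketch (the target host is just an example):

      rem largest payload that still passes unfragmented
      ping -f -l 1472 www.google.com
      rem if it reports "Packet needs to be fragmented", lower the value until it succeeds;
      rem MTU to set on the router/adapter = largest working payload + 28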

    Read the article

  • VirtualBox: Grub sees hard drive, Linux does not

    - by thabubble
    I installed Linux on my second hard drive. I can boot into it just fine. But when I try to boot it from a Windows 7 host using http://www.virtualbox.org/manual/ch09.html#rawdisk, GRUB sees it and can load vmlinuz and initramfs. Log: :: running early hook [udev] :: running hook [udev] :: Triggering uevents... :: running hook [plymouth] :: Loading plymouth...done. ... Waiting 10 seconds for device /dev/disk/by-uuid/{root UUID} ... ERROR: device 'UUID={root UUID}' not found. Skipping fsck. ERROR: Unable to find root device 'UUID={root UUID}' It then drops me into a recovery shell. I checked /etc/fstab and it's empty, there are also no sd* devices in /dev, and the only thing in /dev/disk/by-id is a VBox CD device. I'm not too good with these kinds of things so help would be greatly appreciated.
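
    A root UUID that GRUB can see but the initramfs cannot usually means either the raw VMDK only maps part of the disk (so the root partition never appears inside the VM) or the initramfs lacks the driver for VirtualBox's emulated controller. Recreating the raw VMDK for the whole physical disk is the first thing to rule out; a sketch for the Windows 7 host, assuming the Linux disk is PhysicalDrive1 (verify the number in Disk Management first; paths are placeholders):

      rem run from an elevated command prompt
      VBoxManage internalcommands createrawvmdk -filename C:\VMs\linuxdisk.vmdk -rawdisk \\.\PhysicalDrive1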

    Read the article

  • APF, IPTABLES, Fedora 15 - Not blocking correctly

    - by RichardW11
    I just got a new remote server which came with Fedora 15. I first tried to run APF but it gave me this error: "apf(18031): {glob} unable to load iptables module (ip_tables), aborting.". I then changed SET_MONOKERN="0" to SET_MONOKERN="1" to resolve the problem. However, with my config file showing BLK_P2P_PORTS="1214,2323,4660_4678,6257,6699,6346,6347,6881_6889,6346,7778" the ports show up as closed instead of being filtered. Any idea why this would be happening? 22/tcp open ssh 80/tcp open http 443/tcp open https 2323/tcp closed 3d-nfsd 4662/tcp closed edonkey 6346/tcp closed gnutella 6699/tcp closed napster 6881/tcp closed bittorrent-tracker 7778/tcp closed interwise
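
    "Closed" in a port scan normally means the packet reached the host and got a RST back, i.e. nothing is listening but nothing is dropping either, so it is worth checking whether APF actually installed DROP rules for those ports after the MONOKERN change. A quick hedged check:

      # did APF load rules covering the P2P ports at all?
      iptables -L -n -v | grep -E '2323|4662|6346|6699|6881|7778'
      # reload APF and watch the output for errors
      apf -s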

    Read the article

  • NFS users getting a laggy GUI experience

    - by elzilrac
    I am setting up a system (Ubuntu 12.04) that uses LDAP, PAM, and autofs to load users and their home folders from a remote server. One of the options for login is sitting down at the machine and starting a GUI session. Programs such as Chromium (browser) that perform many read/write operations in the ~/.cache and ~/.config folders are slowing down the GUI experience as well as putting strain on the NFS server, which is causing problems for other users. Ubuntu has the handy XDG_CONFIG_HOME and XDG_CACHE_HOME variables that can be set to change the default location of .cache and .config from the home folder to somewhere else. There are several places to set them, but most of them are not optimal. /etc/environment — pros: will work across all shells; cons: cannot use variables like $USER, so you can't give users different locations for .cache and .config; every user's new location would be the same directory. /etc/bash.bashrc — pros: $USER works, so you can place them in different folders; cons: only gets run for bash-compatible shells. ~/.pam_environment — pros: works regardless of shell; cons: cannot use system variables (like $USER), has its own syntax, and has to be created for every user.
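
    A middle ground between those options is a script in /etc/profile.d/, which login shells source and which Ubuntu's LightDM session wrapper generally picks up as well (worth verifying on 12.04); $USER is available there, so each user gets a distinct local path. A hedged sketch — the filename and the /var/tmp location are only illustrative, and only the cache is redirected so per-user settings still roam with the NFS home:

      # /etc/profile.d/xdg-local-cache.sh
      # keep cache churn off the NFS home
      export XDG_CACHE_HOME="/var/tmp/${USER}/.cache"
      mkdir -p "$XDG_CACHE_HOME"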

    Read the article

  • LSB script: how do I know if something goes wrong?

    - by ianaz
    How do I know if an LSB script fails to load, and where do I check the log of the LSB scripts? I added two scripts with the following command: update-rc.d scriptname defaults And just one launches the things I need. It does not seem to be a script error, since if I launch it with /etc/init.d/scriptname it works. This is my script: #!/bin/bash ### BEGIN INIT INFO # Provides: nodes # Required-Start: $remote_fs $syslog # Required-Stop: $remote_fs $syslog # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Starts all node apps # Description: Starts all node apps like AAM, AMT,... ### END INIT INFO echo "Launch Node applications with forever" export PATH=/usr/local/bin:$PATH # Starts the redis server redis-server # Starts AAM forever -o /var/log/AAM.log -e /var/log/AAM.log --spinSleepTime 2000 -m 5 start /var/nodejs/AAM/app.js
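
    Init scripts do not log anywhere by themselves, so the simplest way to see why one misbehaves at boot is to make it capture its own output; it also helps to handle the start/stop argument init passes, since the boot environment (PATH, HOME) differs from an interactive shell. A debugging sketch, not a full replacement for the script above (the log path is arbitrary):

      exec >> /var/log/nodes-init.log 2>&1    # everything below lands in this file
      echo "$(date) invoked with: $1"
      case "$1" in
        start)
          export PATH=/usr/local/bin:$PATH
          redis-server &
          forever -o /var/log/AAM.log -e /var/log/AAM.log --spinSleepTime 2000 -m 5 start /var/nodejs/AAM/app.js
          ;;
        stop)
          forever stop /var/nodejs/AAM/app.js
          ;;
      esac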

    Read the article

  • VPS running out of memory, 200 MB free

    - by demon
    At the beginning of this year I took a VPS for my website because I was running into the resource limits of shared hosting. Here are the things I know: 2 GB memory, with 1 GB swap; Debian x64 server edition installed. Software running on the webserver: mysql apache postfix pop3 imap amavisd clamd cron fail2ban munin-node pure-ftpd spamd nginx Now for the setup: nginx listens on port 80 and handles the static files; the PHP side is done by Apache2 running mod_php in combination with APC (no variable caching!). I am using a pretty 'busy' Drupal and phpBB stack on the server; for Drupal I am using Boost and Authcache to handle the server load with a Pressflow stack. phpBB is just phpBB3 with some mods installed, but has at most 30 users online at a time. The problem is that it starts to use swap a few days after a reboot and thus the site becomes slower. I've added pictures of Monit and Munin, so maybe somebody can help me out... Monit: Munin:
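
    Before tuning anything, it is worth seeing which processes actually hold the 2 GB once swap starts being used; on a stack like this the usual culprits are Apache's per-process size times MaxClients, MySQL's buffers, and clamd. A quick sketch:

      free -m
      ps aux --sort=-rss | head -n 15    # largest resident processes first
      # rough rule of thumb for Apache prefork:
      #   MaxClients ~ memory you can spare for Apache / average apache2 RSS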

    Read the article

  • Changing Vim Home Directory

    - by mcaaltuntas
    Previously I had been using Vim without any problems. However, a few months ago our company made some network and security updates. After that, whenever I plug a network cable into my laptop, it creates a network shared drive "H" with my company name, and when I try to open Vim it doesn't load the plugins and other things that are in my Vim home directory. I have found the reason but I don't know how to solve it. The problem is that these network updates changed our HOME directory. When I write: echo $HOME it prints H. Before plugging in a network cable my home was C:\Users\blabla. How can I change my HOME variable? When I run set it prints: C:\Windows\System32>set | findstr /R "^HOME" HOMEDRIVE=H: HOMEPATH=\ HOMESHARE=\\companyname\blabla\username$
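
    On Windows, Vim takes $HOME from the HOME environment variable when it is set and only falls back to HOMEDRIVE\HOMEPATH otherwise, so defining a per-user HOME overrides the H: mapping without touching the company policy. A sketch (the path is a placeholder for your real profile directory):

      rem persists for the current user; takes effect for newly started programs
      setx HOME C:\Users\yourname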

    Read the article

  • How to control messages to the same port from different emitters?

    - by Alex In Paris
    Scene: A company has many factories X, each of which emits messages to the same receive port on a BizTalk server Y; if all messages are processed without much delay, each triggers an outgoing message to another system Z. Problem: Sometimes a factory loses its connection for half a day or more and, when the connection is reestablished, thousands of messages get emitted. Now, the messages still get processed well by Y (BizTalk can easily handle the load), but system Z can't handle the flood and may lock up and severely delay the processing of all other messages from the other Xs. What is the solution? Creating multiple receive locations that permit us to pause one X or another would lose us information if the factory isn't smart enough to know whether the message was received or not. What is the basic pattern to apply in BizTalk for this problem? Would some throttling parameters help to limit the flow from any one X? Or are there techniques on the outgoing side of Y that I should use instead? I would prefer the latter, since I can be confident that the MessageBox will remember any failures, which could then be resumed.

    Read the article
