Search Results

Search found 17852 results on 715 pages for 'load balancer'.

  • Can't get Unity 3D to work in 11.10

    - by pmoseph
    I recently upgraded to 11.10 on my Lenovo ThinkPad T520, and I'm not able to load Unity 3D (I'm not selecting 2D at login menu either). me@mycomp:~$ echo $DESKTOP_SESSION ubuntu-2d I ran the unity support test below as well. me@mycomp:~$ /usr/lib/nux/unity_support_test -p Xlib: extension "GLX" missing on display ":0.0". Xlib: extension "GLX" missing on display ":0.0". Xlib: extension "GLX" missing on display ":0.0". Error: unable to create the OpenGL context And it looks like I only have one graphics card: me@mycomp:~$ lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) Also, Ubuntu lists nothing under the "Additional Drivers" window. Any help would be extremely appreciated as I'm somewhat of a noob. Thanks! Edit 1: Here is the output of lshw -C display me@mycomp:~$ sudo lshw -C display *-display description: VGA compatible controller product: 2nd Generation Core Processor Family Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 09 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:43 memory:f0000000-f03fffff memory:e0000000-efffffff ioport:5000(size=64)
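
    A diagnostic sketch, assuming the missing GLX comes from a broken Mesa/Intel driver stack after the upgrade (package names are assumed from a stock Ubuntu 11.10 install, not taken from the post):

      sudo apt-get install mesa-utils                  # provides glxinfo
      glxinfo | grep -i "opengl renderer"              # should name the Intel chip, not report GLX errors
      # reinstalling the Mesa GLX libraries and the Intel X driver often repairs a botched upgrade
      sudo apt-get install --reinstall libgl1-mesa-glx libgl1-mesa-dri xserver-xorg-video-intel
      sudo reboot

    If unity_support_test still fails after that, /var/log/Xorg.0.log is the next place to look for which GL libraries X is actually loading.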

    Read the article

  • Can I use a 302 redirect to serve up static content from a URL with escaped_fragment?

    - by Starfs
    We would like to serve up SEO-friendly Ajax-driven content. We are following this documentation. Has anyone ever tried to write a 302 redirect into the .htaccess file that takes the ?_escaped_fragment_= string and sends it to a static page, for example /snapshot/yourfilename/? How will Google react to this? I've gone through the documentation and it's not very clear. The quote below is what I find in Google's documentation. I'm not sure whether they are saying that you can redirect the _escaped_fragment_ URL to a different static page, or that this is about redirecting the #! URL to static content. Thoughts? From Google's site: Question: Can I use redirects to point the crawler at my static content? Redirects are okay to use, as long as they eventually get you to a page that's equivalent to what the user would see on the #! version of the page. This may be more convenient for some webmasters than serving up the content directly. If you choose this approach, please keep the following in mind: Compared to serving the content directly, using redirects will result in extra traffic because the crawler has to follow redirects to get the content. This will result in a somewhat higher number of fetches/second in crawl activity. Note that if you use a permanent (301) redirect, the url shown in our search results will typically be the target of the redirect, whereas if a temporary (302) redirect is used, we'll typically show the #! url in search results. Depending on how your site is set up, showing #! may produce a better user experience, because the user will be taken straight into the AJAX experience from the Google search results page. Clicking on a static page will take them to the static content, and they may experience avoidable extra page load time if the site later wants to switch them to the AJAX experience.
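
    For reference, a minimal .htaccess sketch of the kind of 302 being asked about, assuming snapshots live under /snapshot/ and the Ajax app is served from the site root (the paths are placeholders, not taken from the question):

      RewriteEngine On
      # when the crawler asks for ?_escaped_fragment_=<name>, send a 302 to the static snapshot
      # (/snapshot/ is a placeholder path)
      RewriteCond %{QUERY_STRING} ^_escaped_fragment_=(.+)$
      RewriteRule ^$ /snapshot/%1/? [R=302,L]

    The trailing ? drops the original query string from the redirect target; per the quoted guidance, a 302 keeps the #! URL in Google's results rather than the snapshot URL.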

    Read the article

  • SDL: Clipping a Sprite Sheet from Left to Right

    - by 0X1A
    I'm trying to get a sprite sheet clipped in the right order, but I'm a bit stumped; every iteration I've tried has ended up in the wrong order. This is my current implementation: Frames = (TempSurface->h / ClipHeight) * (TempSurface->w / ClipWidth); SDL_Rect Clips[Frames]; for (i = 0; i < Frames; i++) { if (i != 0 && i % (TempSurface->h / ClipHeight) == 0) ColumnIndex++; Clips[i].x = ColumnIndex * ClipWidth; Clips[i].y = i % (TempSurface->h / ClipHeight) * ClipHeight; Clips[i].w = ClipWidth; Clips[i].h = ClipHeight; } where TempSurface is the entire sheet loaded into an SDL_Surface and Clips[] is an array of SDL_Rects. What results from this is a sprite sheet set to SDL_Rects in the wrong order. For example, a sheet of dimensions 4x4 should desirably load as this: | 0 | 1 | 2 | 3 | | 4 | 5 | 6 | 7 | | 8 | 9 | 10| 11| | 12| 13| 14| 15| but gets set in this order: | 0 | 4 | 8 | 12| | 1 | 5 | 9 | 13| | 2 | 6 | 10| 14| | 3 | 7 | 11| 15| What should I be doing for these to be set correctly?
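
    For comparison, a sketch of the same loop arranged row-major (left to right, then top to bottom), which is the order the question wants; the variable names follow the snippet above:

      int Columns = TempSurface->w / ClipWidth;
      int Rows    = TempSurface->h / ClipHeight;
      int Frames  = Columns * Rows;
      SDL_Rect Clips[Frames];
      for (int i = 0; i < Frames; i++) {
          Clips[i].x = (i % Columns) * ClipWidth;   /* column changes fastest */
          Clips[i].y = (i / Columns) * ClipHeight;  /* row advances after each full row */
          Clips[i].w = ClipWidth;
          Clips[i].h = ClipHeight;
      }

    The original loop steps the y coordinate fastest and only advances the column after a full column of rows, which produces exactly the transposed ordering shown above.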

    Read the article

  • My NTFS Partition keeps becoming "unusable" on Ubuntu, Any Ideas?

    - by gopherman
    I just purchased a new Seagate 2TB external drive. My main system uses both Windows and Ubuntu, so I am pretty much stuck with keeping the drive as NTFS. I have done this without any problems before, but since I got this new drive I have been having issues. When I first load up Ubuntu the drive mounts and runs fine; after an unspecified amount of time I start getting Input/Output errors when accessing the drive. When I go to the Disk Utility I get a message stating the drive is "Unknown or Unused". If I disconnect and reconnect the drive, or reboot, everything is fine again. There are no errors coming up with S.M.A.R.T. and it seems to work fine under Windows. Any thoughts?
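
    A few checks worth running when the drive drops out, sketched on the assumption that it shows up as /dev/sdb (substitute the real device name):

      dmesg | tail -n 50          # look for USB resets or I/O errors logged at the moment it fails
      sudo smartctl -a /dev/sdb   # from smartmontools; the full attribute list, not just the overall verdict (assumed device name)
      sudo ntfsfix /dev/sdb1      # from ntfs-3g; clears the NTFS dirty flag and schedules a chkdsk for the next Windows boot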

    Read the article

  • SQLIO help decipher output

    - by SQL Learner
    When load testing a SQL Server box, using the following (the test file is 25 GB): sqlio -kW -t8 -s360 -o8 -frandom -b8 -BH -LS g:\testfile.dat > result.txt sqlio -kW -t8 -s360 -o8 -frandom -b64 -BH -LS g:\testfile.dat >> result.txt sqlio -kW -t8 -s360 -o8 -frandom -b128 -BH -LS g:\testfile.dat >> result.txt sqlio -kW -t8 -s360 -o8 -frandom -b256 -BH -LS g:\testfile.dat >> result.txt Can anyone help me decipher the output? I do not understand the latency min and average. What do these numbers mean? IOs/sec: 10968.80 MBs/sec: 685.55 latency metrics: Min_Latency(ms): 1 Avg_Latency(ms): 5 Max_Latency(ms): 21

    Read the article

  • I get a 403 when requesting a JS file from CloudFront

    - by Roland
    This is new to me so please excuse me if I have no idea what I'm talking about (: I'm trying to set up my own CDN with CloudFront and S3 through a subdomain, by adding a CNAME to that subdomain that points to the CloudFront distribution. I get a 403 when trying to load the file. This is the original S3 link: https://s3.amazonaws.com/chaoscod3r_aws_cdn/libs/polyfills/json3_polyfill.js which works after setting the permission for Everyone to open/download. But when I request the file through the subdomain: http://cdn.chaoscod3r.com/libs/polyfills/json3_polyfill.js I get that 403. Could anyone help me out with this one?
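
    One common cause of a 403 in this kind of setup is object or bucket permissions rather than the CNAME; a bucket policy sketch that makes everything under the bucket publicly readable (the bucket name is taken from the URL above, the rest is an assumption about the intended setup):

      {
        "Version": "2008-10-17",
        "Statement": [{
          "Sid": "PublicReadForCDN",
          "Effect": "Allow",
          "Principal": { "AWS": "*" },
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::chaoscod3r_aws_cdn/*"
        }]
      }

    It is also worth confirming that the CloudFront distribution's origin is the bucket itself and that cdn.chaoscod3r.com is listed as a CNAME on the distribution, not only in DNS.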

    Read the article

  • Most secure way to have IPtables auto-loaded using Debian / Linux

    - by networkIT
    I'd like to know the safest way to load iptables on Debian. Of course, I can use a script that calls iptables-restore: #!/bin/sh iptables-restore < /etc/firewall.conf but: 1) where is the safest place to have it loaded? /etc/network/if-up.d? I'm concerned about the script being loaded early enough at boot time, and reliably enough when plugging/unplugging interfaces... 2) is this script method using iptables-restore the most secure way? 3) additionally, how well does the answer carry over to other Linux distros (Ubuntu, Fedora, CentOS)? Thanks ^^
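
    A minimal sketch of the if-pre-up.d variant, which loads the rules before the interface comes up so there is no window where the box is reachable but unfiltered (the rules file path is the one from the question):

      #!/bin/sh
      # saved as /etc/network/if-pre-up.d/iptables and made executable with chmod +x
      /sbin/iptables-restore < /etc/firewall.conf

    Debian also ships an iptables-persistent package that does essentially this for you, which partly answers question 3 for Debian-family distros; Fedora and CentOS instead load /etc/sysconfig/iptables through the iptables service.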

    Read the article

  • vps running out of memory, 200MB free

    - by demon
    At the beginning of this year I took a VPS for my website because I was running into the resource limits of shared hosting. Here are the things I know: 2GB memory, with 1GB swap; Debian x64 server edition installed. Software running on the webserver: mysql, apache, postfix, pop3, imap, amavisd, clamd, cron, fail2ban, munin-node, pure-ftpd, spamd, nginx. Now for the setup: Nginx listens on port 80 and handles the static files; the PHP side is done by apache2 running mod_php in combination with APC (no variable caching!). I am using a pretty 'busy' Drupal and phpBB stack on the server. For Drupal I am using Boost and Authcache to handle the server load, with a Pressflow stack. phpBB is just phpBB3 with some mods installed, with at most 30 users online at a time. The problem is that it starts to use the swap a few days after a reboot, and the site then becomes slower. I've attached Monit and Munin graphs, so maybe somebody can help me out...
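
    With mod_php under the prefork MPM, every Apache child carries a full PHP engine, so one common way to keep a 2GB box out of swap is simply to cap the number of children; a sketch of the relevant block in /etc/apache2/apache2.conf (the numbers are placeholders to be sized against your own per-process footprint from ps or Munin):

      # values below are placeholder sizes, not recommendations
      <IfModule mpm_prefork_module>
          StartServers          2
          MinSpareServers       2
          MaxSpareServers       5
          MaxClients           20
          MaxRequestsPerChild 500
      </IfModule>

    MaxRequestsPerChild recycles children periodically, so slow PHP memory growth is returned to the system instead of accumulating until it spills into swap.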

    Read the article

  • Updating WordPress in a multi-node environment

    - by Peter
    I'm finding this very tricky in a multi-node environment with code under revision control, i.e. multiple frontends and a single database. I have a deployment process that pushes a git repo to the servers, but obviously if I update WordPress from within the admin panel, it only updates the files on one FE. I would then need to copy the new files over to the other FE nodes. Plus, when WordPress updates itself on a node, it writes code straight into the git working copy. That breaks the auto deploys that perform 'git pull', since the node then has untracked changes and refuses to pull new deploys without manual intervention. How does one easily keep WordPress updated in a multi-node (load balanced) environment?
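
    One way to stop the nodes diverging is to forbid WordPress from writing updates to the local filesystem at all and push every update through the existing git deploy instead; a wp-config.php sketch (the constant is standard WordPress, the workflow around it is an assumption):

      /* Disable the built-in updater and the plugin/theme editor on all frontends,
         so core and plugin updates only ever enter the codebase via the deploy repo. */
      define( 'DISALLOW_FILE_MODS', true );

    Updates are then run once on a workstation or build checkout (for example with wp-cli's "wp core update"), committed, and fanned out to every FE by the normal git pull deploy; user uploads in wp-content/uploads still need shared storage or a sync job of their own.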

    Read the article

  • APF, IPTABLES, Fedora 15 - Not blocking correctly

    - by RichardW11
    I just got a new remote server which came with Fedora 15. I first tried to run APF but it gave me this error: "apf(18031): {glob} unable to load iptables module (ip_tables), aborting.". I then changed SET_MONOKERN="0" to SET_MONOKERN="1", which resolved that error. However, with my config file showing BLK_P2P_PORTS="1214,2323,4660_4678,6257,6699,6346,6347,6881_6889,6346,7778" the ports show up as closed instead of filtered. Any idea why this would be happening? 22/tcp open ssh 80/tcp open http 443/tcp open https 2323/tcp closed 3d-nfsd 4662/tcp closed edonkey 6346/tcp closed gnutella 6699/tcp closed napster 6881/tcp closed bittorrent-tracker 7778/tcp closed interwise
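
    "Closed" in a scan normally means the port answered with a TCP RST (nothing firewalled it), while "filtered" means the probe was silently dropped, so the first thing to verify is that the APF rules actually contain DROP targets for those ports; a quick check, with a hypothetical manual rule for comparison:

      iptables -L -n -v | less                         # confirm the APF chains exist and the P2P ports hit a DROP target
      iptables -A INPUT -p tcp --dport 6881 -j DROP    # hypothetical manual test: if this makes 6881 show as "filtered",
                                                       # the APF rule set was never loaded for that port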

    Read the article

  • How do I recover an accidentally closed Opera window?

    - by Kostas
    Hello there and thanks for all the help! I accidentally closed a window with multiple tabs in Opera. There was another window with a couple of tabs running. I closed and restarted Opera in the hope it would retrieve the windows at startup (it usually asks if I want to continue from last time). Is there a way I can retrieve my closed window with all the tabs? I cannot see them in the history either, probably because when I switched on my PC the Internet connection was down and the pages didn't load :-/

    Read the article

  • Need solutions for sharing a 3Mb/768Kbps DSL line among 60+ users, and faster bandwidth

    - by elistp
    Two parts. Part 1: We currently have 2 DSL Lines with 3Mb/768Kbps speeds load balanced for 60+ users. Accessing the Internet is borderline unusable. The simple solution would be to get a faster DSL Line but the highest DSL package is 6Mb/768Kbps, has quite the price jump, and doesn't do anything to help with upload speeds. I'm looking for free or extremely low cost solutions (web cache, traffic shaping, bandwidth controls, etc) to help with making Internet access more bearable until the next funding year. Can anyone give any advice? Part 2: We're looking into a 4.5Mb bonded T1 in the next funding year which is of course significantly more expensive than 2 DSL lines. Are bonded T1s our only hope for faster speeds? Are there any better alternatives?
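
    On the "free or extremely low cost" side, a caching web proxy on the gateway is the usual first step; a minimal Squid sketch for a transparent cache (the directives are Squid 2.x/3.0-era and the subnet is an assumption):

      # /etc/squid/squid.conf - subnet below is an assumption
      http_port 3128 transparent
      cache_mem 256 MB
      cache_dir ufs /var/spool/squid 4096 16 256
      maximum_object_size 64 MB
      acl localnet src 192.168.0.0/24
      http_access allow localnet
      http_access deny all

    A cache only helps with repeated downloads, so pairing it with per-user traffic shaping (for example Linux tc/HTB on the gateway) is what usually keeps one heavy user from starving the other sixty.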

    Read the article

  • SEO: Getting site to show in location-specific searches

    - by willvv
    I'm really new to this SEO world and I've been reading a lot to try and figure it out. We have a site, moodbond.com, that allows users to browse/create events anywhere, and we fill it with content from the main cities in the US. We would like it to show up for searches like "events in san francisco" or "what to do in new york"; however, since the site is not really location-specific, I'm not sure where to begin. I've been thinking about a couple of things; maybe you can help me decide whether these would be a good way to start or whether I should try something different. 1) Allow location-specific URLs (e.g. moodbond.com/browse/san-francisco) that just show the main page centered on San Francisco. 2) Change the headers/title of the page so they adapt automatically to the city being browsed (and change this dynamically as the user changes the location of the map). 3) Add internal links to different locations, e.g. add a link in the footer of the page that says "Events in Seattle" and makes the site load events in that city (this would probably depend on implementing #1). What do you guys think? Will any of these really help, or should I look for a different approach? Any advice is welcome. Thanks

    Read the article

  • Accessing server by dedicated IP address

    - by Sherwin Flight
    I'm having an issue with my hosting provider after migrating to a new account. It's taking some time to get the problem sorted out, so I am hoping someone here can shed some light on the situation. The server is running WHM/cPanel, and the site I am trying to access has a dedicated IP address. When I connect to the server like this: http://IP.HERE, instead of showing me the website the way I would expect, it shows the contents of a subfolder. So, while I would expect it to load public_html/, it is loading public_html/somefolder/ instead. Any idea why this is happening instead of showing the site's homepage the way I would expect? EDIT: It is not redirecting, so the URL is just http://IP.ADDRESS/, but the files listed are from a subfolder. So, it LOOKS as though I went to http://IP.ADDRESS/subfolder, when the URL says it should be showing the main folder contents. When I access the site using the domain name, it works properly, so I assume the document root is set correctly.

    Read the article

  • Statsd, Graphite and graphs

    - by w00t
    I've set up Graphite and statsd and both are running well. I'm using the example-client.py from graphite/examples to measure load values and it's OK. I started doing tests with statsd, and at first it seemed OK because it generated some graphs, but now it doesn't look quite right. First, this is my storage-schema.conf: pattern = .* retentions = 10:2160,60:10080,600:262974 I'm using this command to send data to statsd: echo 'ssh.invalid_users:1|c'| nc -w 1 -u localhost 8126 It executes; I click Update Graph in the Graphite web interface, it generates a line, I hit Update again and the line disappears. If I execute the previous command 5 times, the graph line will reach 2 and it will actually save it. Running the same command two more times, the graph line reaches 2 and disappears. I can't find what I have misconfigured. The intended use is this: tail -n 0 -f /var/log/auth.log|grep --line-buffered "Invalid user" | while read line; do echo "ssh.invalid_users:1|c" | nc -w 1 -u localhost 8126; done

    Read the article

  • PHP 5.4.7 and Apache 2.2 Trouble under Windows XP

    - by IssamTP
    I'm trying to set up a test environment on a virtual machine running Windows XP Home (fully updated), with Apache 2.2 and PHP 5.4.7. I can load the PHP 5 module inside httpd.conf, and if I don't rename the php.ini-DEVELOPMENT (or -PRODUCTION) file to php.ini, the engine works fine. This basic configuration doesn't have the MySQL module loaded, so I have to rename .ini-DEVELOPMENT to .ini and edit it as follows: ; Directory in which the loadable extensions (modules) reside. ; http://php.net/extension-dir ; extension_dir = "./" ; On windows: extension_dir = "C:/php/ext/" ... extension=php_mysql.dll extension=php_mysqli.dll Apache restarts with no problems and... all I get is a blank page. Where can I see an error message, or do you know where the trouble is? Tell me if I need to post something else to give you more details.
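
    A blank page from Apache+PHP almost always means a fatal error with error display switched off, so the first step is to surface the error itself; a php.ini sketch (the log path is an assumption based on the C:/php install above):

      ; make the underlying startup or fatal error visible; error_log path is an assumption
      display_errors = On
      display_startup_errors = On
      error_reporting = E_ALL
      log_errors = On
      error_log = "C:/php/php_errors.log"

    Apache's own error.log is also worth checking; with the extensions enabled, a failure to load php_mysql.dll or php_mysqli.dll is usually reported there or as a startup warning once display_startup_errors is on.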

    Read the article

  • NFS users getting a laggy GUI experience

    - by elzilrac
    I am setting up a system (Ubuntu 12.04) that uses LDAP, PAM, and autofs to load users and their home folders from a remote server. One of the options for login is sitting down at the machine and starting a GUI session. Programs such as Chromium (browser) that perform many read/write operations in the ~/.cache and ~/.config directories are slowing down the GUI experience as well as putting strain on the NFS server, which is causing other users to have problems. Ubuntu has the handy-dandy XDG_CONFIG_HOME and XDG_CACHE_HOME variables that can be set to change the default location of .cache and .config from the home folder to somewhere else. There are several places to set them, but most of them are not optimal. /etc/environment - pros: works across all shells; cons: cannot use variables like $USER, so you can't give users different locations for .cache and .config - every user's new location would be the same directory. /etc/bash.bashrc - pros: $USER works, so you can place them in different folders; cons: only gets run for bash-compatible shells. ~/.pam_environment - pros: works regardless of shell; cons: cannot use system variables (like $USER), has its own syntax, and has to be created for every user.
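
    One more place that avoids both drawbacks on Ubuntu 12.04 is a snippet under /etc/X11/Xsession.d/: those files are sourced for every graphical login regardless of the user's shell, and $USER is available there; a sketch (the filename and the local target paths are assumptions):

      # /etc/X11/Xsession.d/60local-xdg-dirs  (assumed name; target paths below are assumed local, non-NFS locations)
      XDG_CACHE_HOME="/var/cache/xdg/$USER"
      XDG_CONFIG_HOME="/var/local/xdg-config/$USER"
      mkdir -p "$XDG_CACHE_HOME" "$XDG_CONFIG_HOME"
      export XDG_CACHE_HOME XDG_CONFIG_HOME

    The trade-off is that settings in XDG_CONFIG_HOME stop following the user between machines, which may matter more than it does for the cache.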

    Read the article

  • VirtualBox: Grub sees hard drive, Linux does not

    - by thabubble
    I installed Linux on my second hard drive. I can boot to it just fine. But when I try to boot it from a Windows 7 host using http://www.virtualbox.org/manual/ch09.html#rawdisk, GRUB sees it and can load vmlinuz and the initramfs. Log: :: running early hook [udev] :: running hook [udev] :: Triggering uevents... :: running hook [plymouth] :: Loading plymouth...done. ... Waiting 10 seconds for device /dev/disk/by-uuid/{root UUID} ... ERROR: device 'UUID={root UUID}' not found. Skipping fsck. ERROR: Unable to find root device 'UUID={root UUID}' It then drops me into a recovery shell. I checked /etc/fstab and it's empty; there are also no sd* devices in /dev, and the only thing in /dev/disk/by-id is a VBox CD device. I'm not too good with these kinds of things, so help would be greatly appreciated.
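
    For reference, the raw-disk mapping step from chapter 9 of the manual looks like this on a Windows host (run from an elevated prompt; the VMDK path is a placeholder and PhysicalDrive1 is an assumption for the second disk):

      REM PhysicalDrive1 and the .vmdk path are assumptions - adjust to the real disk
      VBoxManage internalcommands createrawvmdk -filename C:\VMs\linux-raw.vmdk -rawdisk \\.\PhysicalDrive1

    If GRUB loads but the initramfs cannot find the root device, the usual suspects are a VMDK that maps the wrong physical drive, or the VMDK being attached to a controller type whose driver is not built into that initramfs; attaching it to a plain IDE or SATA controller in the VM settings is the easy test.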

    Read the article

  • Keep Uploaded Files in Sync Across Multiple Servers - LAMP

    - by Dfranc3373
    I have a website that is currently utilizing two servers, an application server and a database server; however, the load on the application server is increasing, so we are going to add a second application server. The problem I have is that the website has users upload files to the server. How do I get the uploaded files onto both of the servers? I do not want to store images directly in the database, as our application is database-intensive already. Is there a way to sync the servers with each other, or is there something else I can do? Any help would be appreciated. Thanks
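
    If shared storage (NFS, GlusterFS or similar) is not an option, one low-effort sketch is to push new uploads between the application servers with rsync over SSH on a short cron interval (hostnames and paths are placeholders):

      # crontab on app1, with the mirror-image job on app2; paths and hostnames are placeholders
      * * * * *  rsync -az --update /var/www/app/uploads/ app2:/var/www/app/uploads/

    The --update flag keeps the newer copy when both sides have a file, but deletions do not propagate and there is up to a minute where a file exists on only one node; an inotify-driven tool such as lsyncd, or a proper shared filesystem, removes both gaps once upload volume grows.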

    Read the article

  • SAMBA and Linux ACLs -- "Permission denied" on write to share but file written nevertheless

    - by MCH
    I set up a writable share directory "/home/net/share" with acl like this: sudo mkdir -p "/home/net/share" sudo setfacl -m "u:localuser:rwx,u:remoteuser:rwx,g:users:rwx" "/home/net/share" My /etc/samba/smb.conf looks like this: [global] workgroup = w server string = server security = user load printers = no log file = /var/log/samba/%m.log max log size = 50 dns proxy = no printing = bsd printcap name = /dev/null disable spoolss = yes encrypt passwords = true invalid users = nobody root follow symlinks = yes wide links = yes [share] comment = Writable by localuser and remoteuser path = /home/net/share valid users = remoteuser read only = no public = no printable = no Locally, localuser and remoteuser have user accounts and smbpasswds and can both read, create and delete files in /home/net/share. But when I log on from a different machine (like this: sudo mount -t cifs //server/share mountpoint/ -o username=remoteuser ), I get "Permission denied" both when trying to create directories and files, oddly though, it does create files (not directories!) despite these messages! How can I get this working?
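
    Two Samba-side settings worth ruling out, sketched as additions to the [share] section; these are assumptions about the cause, not confirmed from the post - without forced masks, new files are created with the connecting user's mode bits, and the directory's ACL mask can still strip group write even when the named ACL entries look right:

      [share]
          # assumed additions - adjust to taste
          create mask = 0664
          directory mask = 0775
          inherit acls = yes

    On the filesystem side, getfacl /home/net/share shows the effective mask, and a default ACL (setfacl -d -m u:remoteuser:rwx /home/net/share) makes newly created subdirectories inherit the same permissions.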

    Read the article

  • Upload large database SQL file

    - by Devy
    I have a database dump of more than 20 GB on my hard disk. What is the best way to upload it with the least load (and cost) possible on the server? - I'm on Windows 7. - I have FTP and SSH access to the server. I avoid FTP because my connection cuts off a lot, and I can't imagine re-uploading the whole file after failing at 99%. I found some tools that split the large .sql file into small .sql files, but they didn't mention how to gather these files back into one file. Another way is to archive the big .sql file to .rar with the -v (volume) option, upload the parts through FTP, then unpack them. But unpacking will also cost, right? I know it will cost in any case, but any best practice would be strongly appreciated.
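
    A sketch of the split-and-reassemble route, plus a resumable alternative over SSH (file names are placeholders, and the commands assume a Unix-style toolchain on Windows 7 such as Cygwin or Git Bash):

      # dump.sql is a placeholder name; compress first - SQL dumps usually shrink a great deal
      gzip -9 dump.sql                                  # produces dump.sql.gz

      # option 1: split into 100 MB chunks, upload them, then reassemble on the server
      split -b 100m dump.sql.gz dump.sql.gz.part_
      cat dump.sql.gz.part_* > dump.sql.gz              # run on the server once all parts are up

      # option 2: a single resumable transfer - re-running it continues where the last attempt stopped
      rsync --partial --progress -e ssh dump.sql.gz user@server:/path/dump.sql.gz

    Either way the server-side cost is mostly the one-off decompression and import; uploading the compressed dump rather than the raw .sql is what saves the most transfer.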

    Read the article

  • OSB, Service Callouts and OQL

    - by Sabha
    Oracle Fusion Middleware customers use Oracle Service Bus (OSB) for virtualizing service endpoints and implementing stateless service orchestrations. Behind the performance and speed of OSB, there are a couple of key design choices that can affect application performance and behavior under heavy load. One of the heavily used features in OSB is the Service Callout pipeline action for message enrichment and for invoking multiple services as part of a single orchestration. Overuse of this feature, without understanding its internal implementation, can lead to serious problems. This series will delve into OSB internals, the problem associated with use of Service Callout under high load, diagnosing it via thread dump and heap dump analysis using tools like ThreadLogic and OQL (Object Query Language), and resolving it. The first section in the series will mainly cover the threading model used internally by OSB for implementing Route vs. Service Callout actions. The second section of the "OSB, Service Callouts and OQL" blog posting will delve into thread dump analysis of the OSB server, detecting threading issues relating to Service Callout, and using heap dumps and OQL to identify the related proxy and business services involved. The final section of the series will focus on the corrective action to avoid Service Callout related OSB server hangs. Before we dive into the solution, we need to briefly discuss Work Managers in WLS. Please refer to the blog posting for more details.

    Read the article

  • Protected flash video (requires HAL) on Gentoo

    - by Mala
    I am unable to play "protected" flash video, such as Amazon Prime Instant Video. From what I've read and uncovered, this seems to be due to a lack of HAL being installed on my computer. Confirmation that it is required for protected video can be seen towards the beginning of http://helpx.adobe.com/x-productkb/multi/flash-player-11-problems-playing.html However, hal is not in the gentoo portage tree, and in any case has been deprecated and replaced by udev. How can I go about getting Amazon Prime Instant Video to work again? I was considering grabbing the source from http://www.freedesktop.org/wiki/Software/hal but the links there won't load, and trying to install it from old ebuilds or from overlays which claim to still support it (e.g. kde-sunset) result in a compilation error: In file included from addon-generic-backlight.c:38:0: /usr/include/glib-2.0/glib/gmain.h:21:2: error: #error "Only <glib.h> can be included directly." Has anyone else solved this issue?

    Read the article
