Search Results

Search found 20799 results on 832 pages for 'long integer'.


  • Acer Aspire One -- strange battery problem, charges only up to ~90%

    - by houbysoft
    I have a strange problem on the Acer Aspire One D250. It happened once before, stayed for about two weeks, and then "fixed itself". The problem is as follows: the battery can't seem to get fully charged; the indicator is stuck at about 90% and never passes that value, yet the status still shows "charging". It's probably not a software problem -- I have Arch Linux and Windows 7 installed and both report exactly the same thing.

    I have tried everything I could think of: leaving it charging for extremely long periods, doing a few complete charge/discharge cycles, removing and reinserting the battery, cleaning the connectors, even updating the BIOS. Nothing helped. Also, when it is charging, it charges pretty fast until about 70% and then progresses extremely slowly. The battery holds whatever charge the indicator reports normally -- I just can't get it past the 90%.

    At first I thought this was a simple battery failure (even though the computer is not that old, about 6-7 months), but as I mentioned, it happened once before and then one day fixed itself. I tried contacting Acer about this, but support was not helpful -- it seemed like they used canned responses, the usual. Any thoughts on how to fix this?

  • Windows 7 SSH file server

    - by Siriss
    Hello all -- I have looked at the other posts but have not quite found an answer. I have a question about Windows file sharing over SSH.

    I have copssh installed and it is working for Remote Desktop connections. I have port 22 forwarded on my router, etc. I connect from a Mac or PuTTY with a tunnel along these lines:

        ssh -l copsshusername -L 3391:localhost:3389 [external ip]

    That works fine. I would now like to configure Windows 7 to give the SSH account I log in with access to certain shared folders. I have documents and videos and things that I would like to be able to download externally. I have done this before on Linux, and a long time ago on XP, but I cannot figure out what I am missing on Windows 7. There is a designated SSH user that copssh uses to run the service and that I use to log in as. I have googled and googled and have not found a solution that does everything I need, which is why I am turning here for ideas. I hope I am explaining this correctly. Thank you very much for your help!
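    One direction that might work here, sketched under the assumption that copssh uses a stock OpenSSH sshd_config (it is OpenSSH built on Cygwin, but paths and the exact config location vary by copssh version): restrict the account to SFTP and expose the folders inside its home directory.

        # sshd_config fragment -- a sketch, not a tested copssh recipe.
        # Serve SFTP in-process and limit this one account to it:
        Subsystem sftp internal-sftp

        Match User copsshusername
            ForceCommand internal-sftp

        # Then link the Windows folders to share into the account's home
        # directory from a Cygwin shell (paths are hypothetical):
        #   ln -s /cygdrive/c/Users/Me/Documents  ~/documents
        #   ln -s /cygdrive/c/Users/Me/Videos     ~/videos

    With that in place, an SFTP client (or sshfs) pointed at port 22 should see the linked folders without any extra Windows share configuration.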

  • Metacity/Compiz not starting upon login, Ubuntu 10.10

    - by Ryan Lanciaux
    TL;DR: as of this afternoon, I do not have a window manager when I log in to Ubuntu 10.10. I would like the window manager to start on login without adding it to the startup applications.

    I just started using Linux again as my home OS (I used it for a long time years ago but have been on Windows up until this past weekend), so this may be kind of n00b-ish :) Anyway, up until today, everything on my machine was running okay. I did not have Compiz running as the default WM because I'm running the NVidia drivers and Xinerama (and as I understand it, Xinerama and Compiz don't work well together). I made no changes to my xorg.conf or anything else, but today when I logged in, I had to manually start Metacity from the command line to get any window manager at all.

    I'm really not sure what would be causing this or what I can do to get it working again. My xorg.conf is available here: https://gist.github.com/845618. My default window manager is set to /usr/bin/metacity in Configuration Editor under /desktop/gnome/applications/window_manager.

    P.S. Any tips on how to run 3 monitors where I can move windows between screens without Xinerama would be appreciated, but that's probably for another thread :)
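    For reference, the gconf key mentioned above can also be checked and re-applied from a terminal, and the window manager restarted in place -- a quick sanity check rather than a root-cause fix (the key path is the one from the question; gconftool-2 syntax is standard GNOME 2):

        # read and (re)set the default window manager key
        gconftool-2 --get /desktop/gnome/applications/window_manager
        gconftool-2 --type string \
            --set /desktop/gnome/applications/window_manager /usr/bin/metacity

        # swap the running window manager without logging out
        metacity --replace &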

  • IIS7 default document for urlMapped url throws 403 error

    - by MorningZ
    Hopefully this all makes sense: I have a Web Application project against an IIS7 server that is "theme-able" using different master pages. As a result of what I am trying to do, the root of the project has no .aspx files, so I am using the web.config's urlMappings ability to rewrite "~/default.aspx" to "~/themes/a/default.aspx". This works great as long as I type in "http://www.mysite.com/default.aspx", but typing just "http://www.mysite.com" results in a "403 - Forbidden: Access is denied" error.

    I was hoping that the combination of urlMapping and default document would be smart enough to handle this, but it's not:

        <system.webServer>
          <defaultDocument enabled="true">
            <files>
              <clear />
              <add value="default.aspx"/>
            </files>
          </defaultDocument>
        </system.webServer>

    I also tried the following, to no avail:

        <system.webServer>
          <defaultDocument enabled="true">
            <files>
              <clear />
              <add value="~/themes/a/default.aspx"/>
            </files>
          </defaultDocument>
        </system.webServer>

    I was hoping a request would come in without a document defined, IIS7 would assume it was default.aspx, and then the urlMapping would map it accordingly, but nope. Any pointers? I've read a ton of posts here with similar issues, but not this exact issue.
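    For context, the urlMappings side of this setup normally lives under system.web, along these lines (a sketch of the configuration the question describes; the paths are the question's own):

        <system.web>
          <urlMappings enabled="true">
            <add url="~/default.aspx" mappedUrl="~/themes/a/default.aspx" />
          </urlMappings>
        </system.web>

    The likely catch is ordering: the IIS7 default-document module checks for a physical default.aspx on disk before the request ever reaches ASP.NET's urlMappings, and when no such file exists the request falls through to directory browsing, hence the 403. That suggests either dropping a stub default.aspx into the root or doing the rewrite at the IIS level instead.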

  • Struggling with proper way to setup Permissions on Linux/Apache Web Server

    - by Dr. DOT
    Your expert experience and assistance is greatly appreciated here. I have been running a LAMP server for a long time, yet I still struggle with the best way to set file and directory permissions for FTP and WWW activity. My control panel is WHM/cPanel (not that it makes a difference), and out of the box:

      - files are owned by the user account set up in WHM (e.g., "abc")
      - files have a group setting of "abc" as well
      - files are created with permissions 644
      - directories are owned by "abc"
      - directories have a group setting of "abc"
      - directories are created with permissions 0755

    Again, these are the default permission settings, and everything is fine with FTP activity -- but please advise me if any of these file/directory settings create issues, especially with security.

    Here's where my struggle comes into play. I have PHP apps that allow a visitor to create, edit, rename, delete, etc. subdirectories and files in certain selected directories. PHP runs as "nobody" on my server. So in order to get my PHP/web apps to work, I have had to run:

        chown nobody *
        chgrp nobody *
        chmod 0777 *

    on everything in these selected subdirectories. I know this is probably a huge security hole (so don't ask me for any links :) but how should I set the permissions to allow my FTP user to do his thing, while allowing the PHP apps to do their thing, while also minimizing any security risks and exposures? I know that big CMS systems like Drupal, Joomla, WordPress and so on handle this. Thanks ahead of time for reading through this and offering your expert advice!
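    One common middle ground (a sketch, assuming the cPanel account is "abc", PHP runs as "nobody", and the writable tree is under public_html/uploads -- all placeholders): keep the FTP user as owner, hand group ownership to the web server, and use the setgid bit so new files inherit the group, instead of 0777.

        # group-writable rather than world-writable; limited to the one tree
        chown -R abc:nobody /home/abc/public_html/uploads
        find /home/abc/public_html/uploads -type d -exec chmod 2775 {} \;
        find /home/abc/public_html/uploads -type f -exec chmod 0664 {} \;

    The 2 in 2775 is the setgid bit, so files PHP creates stay in the shared group and remain editable over FTP. Longer term, running PHP as the account user (suPHP/FastCGI, which cPanel supports) removes the need for any of this.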

  • Windows + Django + mod_wsgi = "DLL load failed"

    - by Kyle MacFarlane
    For a long time I was using Python 2.5 for all of this without problems, but I recently upgraded to 2.7, since building stuff for 2.5 is a real pain. I also updated mod_wsgi to 3.3 for Python 2.7.

    Everything is working fine with Apache + mod_wsgi on CentOS, and also in the Django runserver on both Windows and CentOS, but not with Apache + mod_wsgi on Windows. Whenever I try to access a page in my Django app I get the following (note that Apache starts fine):

        ImportError at /
        DLL load failed: The specified module could not be found.

    which is caused by things like:

        from Crypto.Cipher import AES

    etree and others cause the exact same error; it is not limited to any specific package. Anything with .pyd files fails.

    Googling around suggests reinstalling Python "for all users", but the installer doesn't give you that option anymore anyway. For good measure I've tried reinstalling Python 2.7 as an administrator and also told it to register itself as the default version of Python, but neither helped. I think the solution might have something to do with:

      - the fact that I have 2.5, 2.6 and 2.7 installed on this machine, and mod_wsgi might be loading the DLLs for 2.5 instead of 2.7;
      - something to do with WSGIPythonPath, which I usually don't need to set.
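    A cheap way to test the multiple-Pythons theory (a diagnostic sketch, not a fix): log which interpreter mod_wsgi actually embedded from the top of the .wsgi file, before any import that drags in a .pyd, and compare it against the 2.7 install.

        # top of the .wsgi script -- writes interpreter details to the
        # Apache error log on first load (Python 2 syntax)
        import sys
        print >> sys.stderr, "python version:", sys.version
        print >> sys.stderr, "prefix:", sys.prefix
        print >> sys.stderr, "sys.path:", sys.path

    If the prefix points at the wrong installation, pinning the search path explicitly with mod_wsgi's WSGIPythonPath directive in httpd.conf (and making sure C:\Python27 and C:\Python27\DLLs come first in the PATH seen by the Apache service) is the usual next step.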

  • Unable to connect to FTP - Connection timeout after MLSD

    - by Afrosimon
    So here is my problem: I'm absolutely unable to connect to an FTP server, in circumstances I've never seen before. Here is the situation: I get a "Connection timed out" just after the MLSD command.

    I usually use FileZilla under Ubuntu, but to make sure the problem isn't related to this particular client, I tried a few others: gFTP on Ubuntu, and WinSCP and FreeFTP on Windows 7. All the same result. I also made sure to try both active and passive modes. Same result.

    At this point I would be inclined to think there is something wrong with my current network (furthermore, according to a coworker, the FTP server is OK). But I did check with http://ftptest.net/ and I am able to get the directory listing there (which I'm not able to through an FTP client). So in the end the one thing I haven't tried is to go on another network, a solution which would probably work but wouldn't be very practical in the long run. And thus I guess there's something wrong with my router... but what could it possibly be?

    Note: I did try to register and post this question on FileZilla's board first... but I can't create an account with a gmail or hotmail address. WTF?

  • How to publish internal data to the internet - as simple as possible

    - by mlarsen
    I asked this at Stack Overflow, but I would like your opinion too, as it has as much to do with administration as it does with coding.

    We have a .NET 2-tier application where a desktop program talks to a database. We support MS SQL Server 2000, 2005 and 2008 and Oracle 9, 10 and 11. The application is sold, not as shrink-wrap, but pretty close. It is quite important for us that installation and configuration are as easy as possible, as installation instructions are usually supplied in written form to the customer's internal IT department. Our application is usually not seen as mission-critical by the IT department, so we need to keep their work down to a minimum.

    Now we are starting to get requests for a web application built on top of the same data. The web application will be hosted by us and delivered as a SaaS application. The challenge is how to move data back and forth between the web application and the customer's internal database. As I see it, we have some requirements:

      - We must be ready to handle the situation where the customer's database is not accessible from the DMZ. I guess the easiest solution is that all communication is initiated from inside the customer's LAN.
      - As little firewall configuration as possible. Best is if we can run without any special configuration as long as outgoing traffic from the customer's LAN is not blocked. If we need something changed in the firewall, we must be able to document that the change is secure.
      - It doesn't have to be real time. Moving data in batches every ten minutes or so is OK.
      - Data moves both ways, but not in the same tables, so we don't have to support merges.
      - It would be nice if we don't have to roll our own framework completely.

    Looking forward to hearing your suggestions.

  • Games, Windows 8.1 and 144Hz display

    - by Marioysikax
    So I have been having problems with a few games after switching from Windows 7 to 8.1, which seem to be related to my 144Hz monitor. A few examples: Shank, Shank 2, Blood of the Werewolf, Astebreed, The Sims 2, and Rayman Legends patch 1.2. I've had a few others as well, but it's been a while and I have a 600+ game Steam library. Of those games, at least The Sims 2 and Shank worked without any problems with the same setup on Windows 7.

    Basically, these games simply refuse to launch with my normal setup. However, if I plug in a 60Hz TV over HDMI instead, everything magically starts to work. For Astebreed and The Sims 2, using windowed mode also works. As for Rayman, the demo and version 1.0 work for some reason, and 1.2 breaks the settings menu.

    I have already tried contacting the various supports. EA support stated the game simply shouldn't work with 8.1 at all (which is a lie, as it works with that TV, and a friend on Windows 8 plays it just fine), Ubisoft support took a few weeks and said they would forward the info for further processing, and Blood of the Werewolf support had no idea what was going on and told me to just use my TV instead. Changing the monitor's refresh rate to 120Hz or forcing it to 60Hz doesn't do anything. I have DVI right now, but I will try DisplayPort when I get the cable.

    At PCGW, Garrett said it may have something to do with how resolution listing works in Windows 8 compared to earlier versions, but my googling skills don't bring anything up, and compatibility mode for earlier Windows versions doesn't work either (not that I expected it to). My system specs are on my Steam profile.

    How do I get these games to work with my 144Hz monitor, as well as possible future games with the same problem? Downgrading to 7 would work, but it is far from practical, and I don't own a legit license for that one.

  • RD Gateway reporting features/capabilities

    - by Don
    We have just implemented RD Gateway for our own department in preparation for a push to the whole agency for telecommuting. It is all set up and working great, but I am trying to figure out how best to go about monitoring/reporting on users. I see third-party software out there that will do it, but is there anything built in, or available via PowerShell/scripting, that would give me a report of the daily activity of users? Something that says "User A connected at this time, was on for this long, sent/received this much data" -- basically some of the same stuff you can see in Event Viewer.

    Ideally I'd like to have this set up so that once a day it emails me the daily usage; then, when a supervisor asks whether their person is actually working (or at least online, sending and receiving x amount of data), I'll have some metrics to give them. I realize that actual work output is relevant and more of a managerial issue, but I would like to be able to offer as much as I can from my end when asked. Thoughts? Thanks!
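    Since the connect/disconnect details do land in Event Viewer, here is a hedged sketch of the scripted route (the log name is the standard RD Gateway operational log on 2008 R2, but the event IDs and message layout are worth confirming in Event Viewer before relying on them -- 302 is typically "user connected to resource" and 303 "user disconnected", with byte counts in the 303 message):

        # PowerShell: mail yourself today's RD Gateway session events.
        # Addresses and SMTP server are placeholders.
        $log = 'Microsoft-Windows-TerminalServices-Gateway/Operational'
        $events = Get-WinEvent -FilterHashtable @{
            LogName   = $log
            StartTime = (Get-Date).Date
        }
        $body = $events | Format-List TimeCreated, Id, Message | Out-String
        Send-MailMessage -To 'you@agency.gov' -From 'rdgw@agency.gov' `
            -Subject 'RD Gateway daily usage' -Body $body `
            -SmtpServer 'mail.agency.gov'

    Run once a day from Task Scheduler, that covers the "email me the daily usage" half; parsing the 303 messages for bytes sent/received gets the rest.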

  • Windows 7 DVD doesn't boot up, neither does USB. :'(

    - by Manan Shah
    My problem is that I'm not able to install Windows 7. I've been trying for the past week. The methods I've tried:

      - I have a Windows 7 bootable DVD which doesn't boot up. I've set the BIOS to boot from the DVD-ROM first, but it just won't boot from the DVD. I tried installing Windows 7 from the same DVD on a friend's PC and it worked, so the DVD has no issues.
      - I tried to run Setup.exe from within the DVD. Two options pop up: "Check compatibility" and "Install now". On clicking "Install now", after some time an error is encountered with the message "Windows was unable to create a required installation folder", error code 0x8007000D. I am running Windows XP Professional and there's only one user on the PC, which is the Admin, so I do not know why the setup is not getting permissions. I've also uninstalled my antivirus and CD-burning software, disabled the firewall, and disconnected all other devices, but it's still the same.
      - I tried to install from a USB device by making it bootable, but that doesn't work either (yes, the mobo supports booting from USB). The problem is that XP does not recognize the device as "USB" on boot; rather, it shows the stick as a removable hard drive. Furthermore, I changed the hard drive boot order to boot from this removable hard drive first, and it still boots my existing OS.

    Is there anything else that can be done? Any help would be greatly appreciated. :) Please ask if any other information is required; this post is becoming increasingly long to add any other details.

    PS: I want to dual-boot Windows 7 with my existing XP, but that would be after I manage to run the Windows 7 setup in the first place.

    PPS: Please bear with any 'not-so-technical' terms, I am a beginner at this. Again, thank you for taking the time and trying to help, really appreciate it. :)
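    On the USB half specifically, the stick usually has to be repartitioned, marked active, and given a Windows 7 boot sector before the BIOS will treat it as bootable. A sketch of the usual procedure follows (drive letters and the disk number are placeholders -- double-check "list disk" output before running "clean", which wipes the stick; also note diskpart's format subcommand needs Vista or later, so on XP format the stick from Explorer between the "active" and "assign" steps instead):

        :: run diskpart, then inside it:
        list disk
        select disk 1
        clean
        create partition primary
        active
        format fs=ntfs quick
        assign
        exit

        :: write a Windows 7 boot sector (bootsect.exe is in \boot on the
        :: DVD, here assumed to be drive D:, with the stick as E:)
        D:\boot\bootsect.exe /nt60 E:

        :: then copy the whole DVD onto the stick
        xcopy D:\*.* E:\ /e /f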

  • RDP Connection to Windows 7 stays really slow

    - by Pavlo
    I have an issue connecting to Windows 7 via RDP. I can open an RDP session, but regardless of any settings, the response times are really long. This is particularly the case when opening a web page in a browser; I've tried IE, Firefox and Google Chrome. I also use an RDP connection to a Windows 2008 server from the same client machine, and the speed is normal with all features turned on. We have Gigabit Ethernet here, so I don't think it's the client's fault.

    As for the Windows 7 machine, I've tried turning all the graphics features off and dropping the color depth to 256 colors. Result: the same. If I work locally on the machine, I cannot see any lag.

    What else have I tried:

      - using the old RDP 5 client from Microsoft
      - setting the network autotuninglevel as seen here

    Do you have any ideas? Thanks in advance!

    Update: the problem seems to be with rendering window contents. All the window borders and panes are rendered quickly, but the content shows up very slowly. Mouse movements are also only recognised by the Win 7 box after some delay. Are there hidden settings in RDP where one could turn some advanced features off or turn some caching on? I use bitmap caching, but that apparently doesn't help.

  • Sudden and frequent hangs on desktop computer: mobo or CPU fault?

    - by djechelon
    I have a desktop computer equipped with an ASUS Crosshair II Formula and a Phenom X6 3.2GHz CPU. My problem is that the computer will often hang all of a sudden, completely stopping responding. When that occurs, the reset key is inoperative, and the power button turns the computer off but is unable to turn it back on; I have to physically disconnect the power cable.

    The problem can occur at any time: when I'm booting Windows, when I'm logging in, when I'm listening to a song, when I'm browsing the Internet, etc. It always occurs after very few minutes of 3D gameplay.

      - I thought it was a video card fault. I have three 8800GTX cards, so I could try all combinations of them: didn't fix it.
      - I thought it was a RAM problem: I tried running with only a subset of my DDR2 banks, but that didn't fix it either.

    Almost every time, I have to reset and reconfigure the BIOS (without AHCI, Win7 won't boot, so I need to restore a few things). If I enable AMD Live, Cool'n'Quiet or other things in the CPU configuration menu, I can be sure the computer won't reach the Windows desktop in 99% of cases -- it randomly hangs somewhere in the boot process or even in the BIOS POST.

    Another interesting thing is that during POST the computer always takes an unusually long time detecting USB devices (the LCD poster shows USB INIT). I've tried disconnecting all USB devices, but POST didn't take any less time. The BIOS revision is 2702, the latest.

    Today I found a different behaviour for once: during the boot screen I got a BSOD with error Stop 0x00000101, "A clock interrupt was not received on a secondary processor within the allocated time interval", which is usually related to overclocking -- but I never overclocked my CPU.

    Judging from the description of my problem, hoping someone has had the same one and fixed it, and since I don't have a spare CPU or motherboard for replacement, I'd like to ask whether you think this is a faulty CPU or a faulty motherboard, and whether there are additional tests I can perform (software tests, given my lack of spare components) to identify which component to replace.

  • How to get rid of "Maxback Engine" for good?

    - by Jonik
    I used to have a Maxtor Shared Storage II network drive; it broke down long ago. (Later I tried to recover some data from it, and partially succeeded, but haven't yet fully documented it on that question.) Anyway, I just noticed there are still some lingering bits of the (thoroughly crappy) software that came with the Maxtor device: a background process called "MaxBack Engine".

    I googled around a bit and found something related but not very useful:

      - http://www.straitmac.com/jforum/posts/list/600.page
      - http://discussions.apple.com/thread.jspa?threadID=725692

    Under /Applications I found "Maxtor EasyManage.app", which I used to use for controlling the drive, and showed it some "rm -rf". Before deleting, I noted that the bundle did contain "MaxBack Engine.app" under Contents/Resources. But still, after a reboot, the "MaxBack Engine" process is back. I did notice, though, that it only appears when logging in with my usual user account; with another account it isn't launched.

    So, dear Mac gurus, what could I do about this pest? I guess I could fall back to some Unix hackery and write a cron job that kills any process with that name, but obviously it'd be nicer to clean my computer of everything left behind by Maxtor's piece of software.
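    The fact that it only starts for one account points at that user's login items or launch agents. A hedged sketch of where to look (the com.maxtor.* plist name below is a guess -- list the directories and see what is actually there):

        # anything launchd knows about with "max" in the name
        launchctl list | grep -i max

        # per-user and system-wide launch agents/daemons and startup items
        ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons
        ls /Library/StartupItems

        # if a Maxtor plist shows up (hypothetical name), unload and remove it
        launchctl unload ~/Library/LaunchAgents/com.maxtor.maxbackengine.plist
        rm ~/Library/LaunchAgents/com.maxtor.maxbackengine.plist

    Also worth a glance: System Preferences > Accounts > Login Items for that user, which is the other per-account start mechanism.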

  • Moving server room to another part of the building

    - by PHLiGHT
    This question is a bit different from the typical "we are moving our server room off-site" or "we are moving the whole office to a new building". Management wants to add some office space, and to do so they want to move the server room to another location within the building.

    The server room has Verizon smart jacks, a few servers, the PBX, and all the office network drops come into this room. I'm going to scout out an alternate location for the equipment, because that is still TBD. This sounds like quite a pain, since the Verizon equipment for our MPLS will need to be moved (never done that) and the office jacks will need to be re-run.

    How do you handle the jacks? I was thinking of keeping them in the same location and having new wall plates put in, with half the ports going to the current location and the other half to the new location. Or do you think 40 drops could just be done over a weekend, so the old runs would be ripped out and replaced with new ones? Currently the wiring is a mess, so this could be a blessing in the long run.

  • Best Practices for adding Exchange Archive to current 3 server setup

    - by ADquestion
    I'm looking to add an archive database (which I know is just a mailbox database) to our current Exchange 2010 environment. I have done this in the past at a previous job, but we had a simpler setup than at this current job. I've been trying to find some best practices to make sure it's set up in an ideal way, but so far I'm not finding the details I would prefer. Hoping someone on here can give me a few pointers.

    Currently we have a three-server setup: Server1, Server2 and Server3, with three databases, DB1, DB2 and DB3, and a DAG set up between them:

      - Server1 has DB1 and DB3 on it; DB1 is not active, DB3 is active.
      - Server2 has DB1 and DB2 on it; both are active.
      - Server3 has DB2 and DB3 on it; both are not active.

    All three servers are virtual (VMware). Each one is set up identically to the others, as follows:

      - C:\ 60GB -- OS
      - E:\ 600GB -- DB (currently only 90GB used, pointing to the datastore; figures are for Server2)
      - F:\ 200GB -- logs (2GB used, pointing to the same datastore as above)
      - G:\ 200GB -- restore (0 used, pointing to the same datastore as above)

    The drives are all thin-provisioned, and it looks as though I have 600GB of available space. The users have not been on Exchange that long and have only about 70GB worth of PSTs to import back in, which will go to the archive database, plus anything older than 2 years from their current inboxes that will be moved into it.

    I was considering placing the archive DB on the E:\ drive of Server3 (only), like the current DBs, but wasn't sure if that was acceptable. I don't plan on adding the archive DB to the DAG; I just plan on having it as a single repository for older emails and manually backing it up every now and then.

    If anyone has any suggestions on this I would appreciate the input. I've done it on a slightly smaller scale before and it worked well, but I like to think it through before pulling the trigger, especially at a new job. :) Thanks again!
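    For the mechanics, a sketch of the shell side on Exchange 2010 (names and paths are placeholders; keeping the archive database out of the DAG just means never adding a database copy for it, and note that homing a user's archive in a different database than their mailbox requires SP1 or later):

        # create the standalone archive mailbox database on Server3
        New-MailboxDatabase -Name "ArchiveDB" -Server "Server3" `
            -EdbFilePath "E:\ArchiveDB\ArchiveDB.edb" `
            -LogFolderPath "F:\ArchiveDB"
        Mount-Database "ArchiveDB"

        # enable a personal archive for a user, homed in that database
        Enable-Mailbox -Identity "jane.doe" -Archive -ArchiveDatabase "ArchiveDB"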

  • Intermittent 5.7.1 email bounce to Exchange 2007

    - by Steve Kennaird
    My knowledge of Exchange isn't particularly great, so excuse me if some of the terminology I use isn't quite right. I'm primarily a web developer who's now responsible for a small business's network. We have a server running SBS 2008 and Exchange 2007. Generally, everything works well; emails can be sent to both internal and external domains without issue. We've only got ~20 users, and Exchange is sitting on a single server.

    I use SendGrid to send emails generated by our externally hosted website to users in the office. Primarily, order notifications are sent to [email protected]. With no discernible pattern, and less than once per week on average, an email to [email protected] will bounce back, and the logs on SendGrid detail the following error:

        550 5.7.1 Unable to relay for [email protected]

    Either side of that failed delivery attempt, I'm able to send and receive emails to/from [email protected].

    Having done some research, incorrect reverse DNS seems like it could be a cause of intermittent bounces like this. Using nslookup, I have found that the reverse DNS doesn't map like it should, e.g.:

        Office IP:   135.325.351.123 (made up IP, for example only)
        Domain:      office.somedomain.com (made up, for example only)
        Reverse DNS: somedomain.gotadsl.co.uk (half made up)

    Could this be a cause? I'm sure that the IP address and the domain should map to each other. Also, it has been suggested to me that since the Exchange server is on a network with an ADSL connection, that could be a potential cause, as the connection "goes up and down all day long". I don't have an opinion on this, as I don't have enough knowledge of Exchange/ADSL to form a reliable one.

    Can anyone offer any insight as to whether one or both of these are actually potential causes, or whether there is another possible cause?
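    On the Exchange side, "550 5.7.1 Unable to relay" is a receive-connector permission response, so it is worth checking which connector SendGrid's servers are hitting and what it allows. A hedged sketch in the Exchange 2007 Management Shell (the connector name is a placeholder):

        # inspect the connectors: bindings, accepted remote IPs, permissions
        Get-ReceiveConnector | Format-List Name,Bindings,RemoteIPRanges,PermissionGroups

        # one common remedy when an external sender is intermittently
        # refused: a connector scoped to the sender's IP ranges with
        # anonymous accept-any-recipient rights
        Get-ReceiveConnector "SendGrid Relay" | Add-ADPermission `
            -User "NT AUTHORITY\ANONYMOUS LOGON" `
            -ExtendedRights "ms-Exch-SMTP-Accept-Any-Recipient"

    Intermittent failures in particular can happen when a sending service rotates between source IPs that fall under different connectors with different permissions, which would fit the no-pattern behaviour described.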

  • Monitoring AWS Systems Behind ElasticBeanStalk

    - by A. Avadis
    So I'm getting a company set up in the Amazon cloud -- creating IaaS protocols/solutions/standardized implementations, etc., while also being the sysadmin for individual systems, app environments, and day-to-day uptime. One of the biggest issues I'm having is tracking various system/application logs, as well as logging/monitoring/archiving system metrics like memory usage, CPU usage, etc., in a centralized fashion, e.g. Nagios + Urchin.

    The biggest impediment to my endeavors is the following: the company application is deployed in the form of a Java .WAR file, uploaded to an Elastic Beanstalk application environment, load-balancing and auto-scaling between 3 (min) and 10 (max) servers, and the EC2 instances that run the application are fired up and disposed of ad hoc. That is to say, I can't monitor the individual EC2 instances for very long, because so many are terminated and then auto-provisioned/auto-scaled on the fly -- so I'd constantly have to "monitor what I'm monitoring" and continuously remove/add EC2 machine addresses to my monitoring lists.

    Is there some way to use monitoring tools like Zabbix or Nagios to monitor the Elastic Beanstalk environment and have them automatically add new EC2 instances to, and remove terminated/failed EC2 instances from, the monitoring list?

    Furthermore, is there anything I can do with Graylog to achieve similar results with the aggregation/centralization of my application logs from multiple EC2 instances into one consolidated set of logs/events? If not Graylog, is there anything like it that can automatically detect which EC2 members are being added to or removed from the environment and collect the logs from them automatically?

    Any and all advice or direction is appreciated. Thanks much, and cheers!!
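    One building block, regardless of the monitoring tool: Beanstalk tags the instances it launches with the environment name, so a cron job can rediscover the current fleet and regenerate the Nagios/Zabbix host list. A sketch with boto (region and environment name are placeholders; the tag key is the one Beanstalk sets):

        #!/usr/bin/env python
        # print the instance IDs and private IPs currently in one Beanstalk
        # environment; pipe this into whatever writes your monitoring config
        import boto.ec2

        conn = boto.ec2.connect_to_region('us-east-1')   # placeholder region
        reservations = conn.get_all_instances(filters={
            'tag:elasticbeanstalk:environment-name': 'my-env',  # placeholder
            'instance-state-name': 'running',
        })
        for res in reservations:
            for inst in res.instances:
                print inst.id, inst.private_ip_address

    Run from cron every few minutes, followed by a config reload, this keeps the host list tracking the auto-scaling group instead of individual machines.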

  • What performance degradation to expect with Nginx over raw Gunicorn+Gevent?

    - by bouke
    I'm trying to get a very high-performing webserver set up for handling long-polling, websockets, etc. I have a VM running (Rackspace) with 1GB RAM / 4 cores. I've set up a very simple gunicorn 'hello world' application with (async) gevent workers. In front of gunicorn, I put nginx with a simple proxy to gunicorn. Using ab, gunicorn spits out 7700 requests/sec, whereas nginx only does 5000 requests/sec. Is such a performance degradation expected?

    Hello world:

        #!/usr/bin/env python

        def application(environ, start_response):
            start_response("200 OK", [("Content-type", "text/plain")])
            return ["Hello World!"]

    Gunicorn:

        gunicorn -w8 -k gevent --keep-alive 60 application:application

    Nginx (stripped):

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
        }

        http {
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;

            upstream app_server {
                server 127.0.0.1:8000 fail_timeout=0;
            }

            server {
                listen 8080 default;
                keepalive_timeout 5;
                root /home/app/app/static;

                location / {
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_pass http://app_server;
                }
            }
        }

    Benchmark (results: nginx TCP, nginx UNIX, gunicorn):

        ab -c 32 -n 12000 -k http://localhost:[8000|8080]/

    Running gunicorn over a unix socket gives somewhat higher throughput (5500 r/s), but it still doesn't match raw gunicorn's performance.

  • Remote mouse pointer not visible in VNC

    - by aef
    I have used VNC desktops as a kind of collaboration server -- as a shared planning and pair-programming environment -- for a long time. My latest iteration uses a KVM guest running Fedora 17 "Beefy Miracle", the Cinnamon desktop environment and an X11VNC server. The X11VNC server is started automatically with the desktop environment using the following command:

        x11vnc -localhost -many -shared -display :0 -bg

    My problem is that, depending on the VNC client, the mouse pointer of the remote system shown through VNC is not synchronized to my client. I really need this, so I can see what my partner is doing on the desktop.

    When using Vinagre 3.2.1 on Ubuntu Oneiric Ocelot (11.10) or Vinagre 2.3.0.3 on Debian Squeeze (6.0), if I don't have my local mouse pointer inside the VNC view, I cannot see the mouse pointer of the remote system, nor its movement. When using TightVNC on Windows 7, I can see a mouse pointer trace for very short moments after moving the mouse, but it is not clearly visible. Using UltraVNC on Windows 7, the mouse pointer is clearly visible all the time.

    With GNOME 2 I never had any problems with remote pointer synchronization, using exactly the same clients. I suspect this could have something to do with Cinnamon's dependency on 3D acceleration. On the other hand, starting Cinnamon's fallback environment, Cinnamon 2D, doesn't change anything.

    Update: same effect when I use GNOME 3.
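    One hedged, client-independent workaround is to have x11vnc draw the cursor into the framebuffer itself instead of relying on each client's cursor-shape support. Both options below are standard x11vnc flags, though the exact behaviour is worth verifying against the man page for the installed version:

        # -nocursorshape: don't use the CursorShapeUpdates extension, so the
        #                 pointer is painted into the frames sent to clients
        # -cursor most:   heuristics to show the cursor in more situations
        x11vnc -localhost -many -shared -display :0 -bg -nocursorshape -cursor most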

  • Finding matching columns in Excel

    - by fakaff
    I've never used Excel before, so I need the simplest solution available, and this is a work assignment due this week, so I didn't have time to read much of the documentation.

    Basically, I have two tables, A and B, and they are both thousands of rows long. Right now (since I don't know better) I'm manually doing this:

      1. Go to row i in table B.
      2. Select the entries in columns B(a, b, c) of that row.
      3. Look for a row in table A where column A(b) matches column B(a).
      4. Paste the entries from row i of table B at the end of the row found in the last step.
      5. Repeat for row i + 1.

    Example: row B(cat, dog, mouse) matches A(mammal, cat, Mr. Whiskers), so I would paste B after A and get A(mammal, cat, Mr. Whiskers, cat, dog, mouse).

    Note: I am not joining tables. I am merely extending table A by pasting row B(a) if row A(b) matches row B(a). Also, sometimes entries are spelled slightly differently, so using wildcards to search for candidates would help.

    As the description should let on, this task is very tedious and inefficient when done by hand (there are thousands of entries), so any quick tips on how to automate some of these operations would be a big help.
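    This is what lookup formulas are for. A sketch under an assumed layout (table A on Sheet1 with its key in column B, table B on Sheet2 with its key in column A and the three columns to bring over in A:C -- adjust to the real sheets):

        =INDEX(Sheet2!A:A, MATCH($B2, Sheet2!$A:$A, 0))
        =INDEX(Sheet2!B:B, MATCH($B2, Sheet2!$A:$A, 0))
        =INDEX(Sheet2!C:C, MATCH($B2, Sheet2!$A:$A, 0))

    Put these in the first three empty columns of Sheet1's row 2 and fill them down; each row of table A then pulls the matching row of table B in next to it. For the loosely spelled entries, MATCH with 0 (exact match) accepts wildcards, e.g. MATCH("*"&$B2&"*", Sheet2!$A:$A, 0), and wrapping each formula in IFERROR(..., "") (Excel 2007 and later) blanks out rows with no match.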

  • ESXi 5 VM Putty session hangs, vSphere client timing out

    - by user192702
    First of all, I believe this is an ESXi issue, but let me know if you have seen it elsewhere. It started about a year ago, when I noticed that occasionally, when I putty via SSH to my VM guests, anything that makes the session display a lot of output at once will hang it, and I have to start a new session, often only to find the same behaviour. By "display a lot of things" I mean any of the following:

      1. tail -f filename
      2. pasting a long command
      3. less filename

    If I type one character at a time, this doesn't happen. I tried searching online, and it always points me to flow-control settings, but the various suggestions I've tried have never resolved the issue.

    Since last week, I've noticed I'm not able to connect to my POP3 server from Outlook (it times out from Outlook's perspective). Today I tried to connect to the ESXi host via the vSphere client and it gives me a timeout as well. The exact behaviour and error I saw is similar to the one posted at the following URL, but the suggested technique also failed to resolve the issue: http://davidcocke.blogspot.hk/2012/02/unable-to-login-with-vsphere-client.html

    Has anyone experienced this before? Any suggestions on how to troubleshoot it?
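    The pattern -- interactive typing fine, large bursts of output hanging, and now other TCP services timing out too -- is a classic symptom of an MTU/path-MTU problem somewhere between client and host rather than of ESXi itself. A hedged first test from a Linux client (flag syntax is Linux ping; on Windows the equivalent is ping -f -l <size>):

        # 1472 bytes of payload + 28 bytes of headers = a 1500-byte packet;
        # -M do forbids fragmentation, so a failure here while smaller
        # sizes succeed points at something dropping big packets en route
        ping -M do -s 1472 <esxi-or-guest-ip>
        ping -M do -s 1400 <esxi-or-guest-ip>

    If the large probe dies, bisect the size downwards to find the largest that passes, then look at the MTU settings on the vSwitch, the physical switch ports, and any VPN in between.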

  • Can Internet data be used by malware when PC off?

    - by Val
    I have noticed over the last month that my off-peak data allowance has been used at a rate of approx. 350MB per hour. This has meant that I have gone over my quota and been slowed down to 256k by my ISP. No one in the house is using the connection at that time (2am-8am is my ISP's off-peak window), and my PC and other wireless devices (iPad and iPhone) are turned off.

    I have changed the wireless password on my modem 3 times and it is now 30 digits long, so I don't think someone else is using my wireless access between 2am and 8am. It has been suggested by my ISP that I may have malware/spyware on my computer. Sorry for my ignorance, but can malware still run if the PC is off?

    I did look at my modem's log and followed an IP address to a service called Amazon Simple Storage Service. Could this company possibly be the culprit? I am not too tech-savvy, so any assistance is appreciated. I have run a barrage of spyware-cleaning software, e.g. Malwarebytes, Spybot, etc.

    Cheers, Val

  • How should I perform database maintenance on a 24x7 system

    - by solublefish
    I'm a software developer who inherited a part-time DBA role. I'm responsible for an application backed by a small, high-volume 24x7 database on SQL Server 2008. While there's other stuff in the DB, the critical piece is a 50GB, 7.5M-row table that serves 100K requests/sec during peak load, and about half that at "night". This is 99%+ read traffic, but the writes are constant, and required.

    I need to be able to perform periodic maintenance without a maintenance window -- say an index rebuild, a job to purge old data, Windows Update, or a hardware upgrade. Most of the advice I've seen is along the lines of "MAKE a maintenance window." While I appreciate the sentiment, I hope there's another way. If it will solve this problem, I do have the ability to purchase new hardware or modify the database, the clients (a set of web-services servers), and much of the application code (ADO.NET + ASP.NET).

    I've been thinking along the lines of using the warm spare (or a third server) to do the maintenance, and then "swapping" it into production:

      1. Synchronize the spare by restoring backups, including a current transaction log.
      2. Perform the maintenance tasks.
      3. Reconfigure the clients to connect to the spare server. Existing connections finish within a minute or so.
      4. The spare server is now the production server.

    The remaining problem is that the new production server is now out of date by however long it took to perform the maintenance. Is there some way that the original production server can be made to queue up changes and merge them to the spare between steps 2 and 3? Any other ideas?
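    For the index-rebuild case specifically, SQL Server 2008 can often avoid the swap dance entirely. A hedged sketch (ONLINE = ON requires Enterprise Edition, and the table/index names are placeholders):

        -- rebuild while readers and writers keep running; the old index
        -- stays in service until the rebuilt copy is swapped in
        ALTER INDEX ALL ON dbo.BigTable
            REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);

    For the catch-up problem in the swap approach, the built-in mechanisms worth evaluating are transactional replication, database mirroring, and log shipping with a final tail-log restore -- all of which queue and forward changes made while the spare is being maintained.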

  • Sane patch schedule for Windows 2003 cluster

    - by sixlettervariables
    We've got a cluster of 75 Win2k3 nodes at work in a coarse-grained compute cluster. The cluster is behind a mountain of firewalls and resides in its own VLAN. Jobs of all sizes and types run on the cluster, and all of the executables running are custom-made. (ed: additional notes on our executables) The jobs range from 30 seconds to 7 days in duration, and may contain one executable or 2000 sub-jobs (of short duration). Obviously we are trying to avoid the situation where IT schedules a reboot during a 7-day production job.

    We have scheduling software which accommodates all of the normal tasks for a coarse-grained cluster, and we can control which machines are active for submission, etc. If WSUS were in some way scriptable (or the client could state its availability for shutdown), we could coordinate the two systems and help out. Currently, the patch schedule is the Sunday after Patch Tuesday, regardless of what is running on the cluster. We have to ask for an exemption every time we want to delay patching a machine for a long-running production job. Basically, while our group is responsible for the machines, we have little control over IT's patch schedule.

    Is patching monthly on Microsoft's schedule sane for a production Windows cluster? Are there software hooks in WSUS where we could say, "please don't reboot just yet"?
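    WSUS itself doesn't expose a "hold the reboot" API, but the client side honours a few policy values that effectively hand reboot control to your scheduler. A hedged sketch of the relevant registry policies (these are the standard documented Windows Update AU values; AUOptions=3 downloads but does not auto-install):

        Windows Registry Editor Version 5.00

        ; download updates and notify rather than auto-install, and never
        ; reboot automatically while anyone is logged on
        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
        "AUOptions"=dword:00000003
        "NoAutoRebootWithLoggedOnUsers"=dword:00000001

    The scheduler can then trigger detection and installation on drained nodes itself (wuauclt /detectnow on Win2k3) and reboot them only when no job is assigned, which gets close to the "client states its availability" idea.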
