Search Results

Search found 18756 results on 751 pages for 'generate images'.

Page 587 of 751

  • Using Toshiba 22EL833 as PC display with GTX8800

    - by Oleg V. Volkov
    I had another Toshiba TV - a 19SL738 - connected to this same PC and video card (GTX 8800) through DVI<-HDMI (DVI on the PC side, HDMI on the TV) before, and it was working perfectly at its native resolution of 1360x768. Some time ago I had to change to the 22EL833 and immediately faced a problem: the Windows 7 control panel and the NVIDIA control panel both report the native resolution for the new TV as 1080i (1920x1080), despite the TV documentation saying that it has the same 1360x768 panel as the previous one. Practical tests confirmed that the true native resolution is indeed 1360x768: plugging in through DVI<-VGA and setting a custom resolution through the NVIDIA panel showed clear colors and a crisp image, while setting anything different with either DVI<-VGA or DVI<-HDMI produced horribly distorted or squished images, with almost unreadable thin lines (in letters, for example). Now, my problem is that there are no drivers for this TV and I'm unable to get a good image while connecting it through DVI<-HDMI directly. The best I've achieved is editing the EDID/driver manually to persuade the system that the native resolution should be 1360x768, and while the image became mostly clear, the colors turned into a strange washed-out effect, with pools of pure yellow, cyan and magenta here and there filling in for other colors. Gradients also became noticeably banded. It looks like dithering gone bad, and it makes me suspect that the image is still being down/upscaled several times internally somewhere along the line. How can I connect this TV to the DVI output of my video card to get the best possible clear image, correct colors and the correct native resolution?

  • What are the most important aspects to consider when choosing a SAN for a small office virtualization?

    - by Prof. Moriarty
    I am in the process of consolidating 6 physical servers running 6 different operating system flavors (don't ask) into two identical physical servers (Dell PowerEdge 2900), using the free VMware ESXi 4.0 platform. We will install an iSCSI SAN over a 1GbE network, and store all virtual machine images on the SAN. Each physical server would run 3 VMs, and in the case of a physical server failure, we would manually switch its 3 VMs over to the other host. These are all internal servers; while important, they can tolerate some amount of downtime (say <1h), which keeps the cost and complexity associated with HA down. I now need to choose the SAN to be used for the setup, on a low budget. We currently have about 2TB of data, but of course I want to be able to grow, do backups of VM snapshots on other drives and remove them to a different location, etc. So what I would like to know is: Which are the must-have features for this setup, without which using a SAN is not worth it? We are mostly a Dell shop, so I have been looking at the EqualLogic PS4000E High Availability model. Any opinions, anecdotes, bad experiences with this model? (This is one of the few models which could accommodate our existing disks from the physical servers.) If you can recommend something that is not Dell but has better value, I would most definitely consider it. Caveats, things to look out for?

  • Looking to replace Ghost with FSArchiver or Clonezilla, few questions about capabilities

    - by Daniel Wright
    I work for a PC repair company and we are looking into setting up a dedicated machine with externally accessible SATA bays to clone hard drives as a safety net in case something goes wrong during a repair. We currently use a SATA/PATA to USB bridge called MagicBridge and Norton Ghost on any workstation, but we're looking to move away from Ghost. We have a computer with a large RAID5 array with Windows Server 2008 Standard currently installed, but this can be replaced with a flavour of *nix. I have some experience with Clonezilla, but FSArchiver also seems like a suitable replacement. My head technician wants to know if my chosen solution (probably Clonezilla or FSArchiver, but I'm open to free suggestions) is capable of:
      - Cloning a degraded RAID, such as a single drive from a RAID1 mirror, without complaining
      - Producing images that are easily mountable (he'd prefer them to be mountable in Windows, but if there is no other easy way, *nix should be fine), akin to Ghost Explorer, so individual files can be restored as well as being able to do bare-metal restores
    My apologies for the wordiness, but I wanted to be thorough in my explanation. Thanks for any suggestions or tips :) EDIT: I've just found out that Clonezilla has a workaround for cloning RAID1 drives. EDIT2: Found the answer to both of my questions; apparently I wasn't phrasing my searches right. Could this question be deleted please?
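
    A rough sketch of how the two requirements above might look on the command line, assuming FSArchiver; the device names, paths and compression level are placeholders, and only the filesystem on the pulled drive is archived:

      # archive the filesystem found on a single member pulled from a RAID1 mirror
      fsarchiver savefs -z3 /mnt/store/customer123.fsa /dev/sdb1
      fsarchiver archinfo /mnt/store/customer123.fsa    # inspect what the archive contains

      # FSArchiver archives are restored with restfs rather than mounted directly;
      # if browsing individual files matters most, a raw copy can be loop-mounted read-only
      dd if=/dev/sdb1 of=/mnt/store/customer123.raw bs=4M
      mount -o loop,ro /mnt/store/customer123.raw /mnt/inspect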

  • Why is my mono/XSP site loading slow?

    - by acidzombie24
    I have two sites on the same server. One is loading perfectly and incredibly fast. The other is an equally complex site, except with a bit less JavaScript and zero images. It's taking several seconds to load and there is a 1 in 3 chance that I get an HTTP 500 error. WTF. I grabbed the latest 2.6.x versions of mono, mod_mono and xsp (libgdiplus-2.6.7, xsp-2.6.5, mod_mono-2.6.3 and mono 2.6.7). This is what's in Apache's error.log:
      [Mon Jan 03 19:33:40 2011] [error] (70014)End of file found: read_data failed
      [Mon Jan 03 19:33:40 2011] [error] Command stream corrupted, last command was 1
      [Mon Jan 03 19:34:52 2011] [error] (70014)End of file found: read_data failed
      [Mon Jan 03 19:34:52 2011] [error] (70014)End of file found: read_data failed
      [Mon Jan 03 19:34:52 2011] [error] Command stream corrupted, last command was 1
      [Mon Jan 03 19:34:52 2011] [error] Command stream corrupted, last command was 1
    And this is the error page:
      Internal Server Error
      The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, webmaster@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log.
      Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny9 with Suhosin-Patch mod_mono/2.6.3 Server

  • Cloudfront - How to invalidate objects in a distribution that was transformed from secured to public?

    - by Gil
    The setting: I have an Amazon Cloudfront distribution that was originally set up as secured. Objects in this distribution required URL signing. For example, a valid URL used to be of the following format:
      https://d1stsppuecoabc.cloudfront.net/images/TheImage.jpg?Expires=1413119282&Signature=NLLRTVVmzyTEzhm-ugpRymi~nM2v97vxoZV5K9sCd4d7~PhgWINoTUVBElkWehIWqLMIAq0S2HWU9ak5XIwNN9B57mwWlsuOleB~XBN1A-5kzwLr7pSM5UzGn4zn6GRiH-qb2zEoE2Fz9MnD9Zc5nMoh2XXwawMvWG7EYInK1m~X9LXfDvNaOO5iY7xY4HyIS-Q~xYHWUnt0TgcHJ8cE9xrSiwP1qX3B8lEUtMkvVbyLw__&Key-Pair-Id=APKAI7F5R77FFNFWGABC
    The distribution points to an S3 bucket that also used to be secured (it only allowed access through the Cloudfront distribution). What happened: At some point, the URL signing expired and requests would return a 403. Since we no longer need to keep the same security level, I recently changed the settings of the Cloudfront distribution and of the S3 bucket it points to, both to be public. I then tried to invalidate objects in this distribution. Invalidation did not throw any errors, however the invalidation did not seem to succeed. Requests to the same Cloudfront URL (with or without the query string) still return 403. The response header looks like:
      HTTP/1.1 403 Forbidden
      Server: CloudFront
      Date: Mon, 18 Aug 2014 15:16:08 GMT
      Content-Type: text/xml
      Content-Length: 110
      Connection: keep-alive
      X-Cache: Error from cloudfront
      Via: 1.1 3abf650c7bf73e47515000bddf3f04a0.cloudfront.net (CloudFront)
      X-Amz-Cf-Id: j1CszSXz0DO-IxFvHWyqkDSdO462LwkfLY0muRDrULU7zT_W4HuZ2B==
    Things I tried: I set up another Cloudfront distribution that points to the same S3 bucket as origin server. Requests to the same object through the new distribution were successful. The question: Did anyone encounter the same situation, where a Cloudfront URL that returns 403 cannot be invalidated? Is there any reason why the object wouldn't get invalidated? Thanks for your help!
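
    For reference, a hedged sketch of issuing the invalidation from the AWS CLI; the distribution ID below is a placeholder, and note that if the S3 objects themselves still deny anonymous reads, CloudFront will keep serving (and caching) 403s no matter how often the path is invalidated:

      # placeholder distribution ID; the path matches the example object above
      aws cloudfront create-invalidation \
          --distribution-id EXXXXXXXXXXXXX \
          --paths "/images/TheImage.jpg"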

  • Location directive in nginx configuration

    - by ryan
    I have an nginx server set up to act as a file server. I want to set the expires directive on images. This is what part of my config file looks like:
      http {
          include /etc/nginx/mime.types;
          access_log /var/log/nginx/access.log;
          sendfile on;
          #tcp_nopush on;
          #keepalive_timeout 0;
          keepalive_timeout 65;
          tcp_nodelay on;
          gzip on;
          gzip_disable "MSIE [1-6]\.(?!.*SV1)";
          location ~* \.(ico|jpg|jpeg|png)$ {
              expires 1y;
          }
          include /etc/nginx/conf.d/*.conf;
          include /etc/nginx/sites-enabled/*;
      }
    I get the following error when I reload the config - "Location directive not allowed here". Can someone tell me what the right syntax for this is? Thanks in advance. EDIT: Found the answer myself. Added it in a comment. Closing this.
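
    For what it's worth, nginx only accepts location blocks inside a server block (or nested inside another location), not directly under http. A minimal sketch with a hypothetical server name and document root:

      http {
          include /etc/nginx/mime.types;
          # ... other http-level settings ...

          server {
              listen 80;
              server_name files.example.com;   # hypothetical
              root /var/www/files;             # hypothetical

              # far-future expiry for common image types
              location ~* \.(ico|jpg|jpeg|png)$ {
                  expires 1y;
              }
          }
      }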

  • Windows and file system abstraction - how much does it matter where something comes from?

    - by deceze
    I have come across the following phenomenon and would like to know how leaky Windows' file system abstraction is or if there's something else involved. I partitioned the hard disk of my MacBook Pro and installed Windows 7 (64 bit). The Bootcamp driver package includes file system drivers (right term?) that enable Windows to access the Mac OS HFS+ partition. AFAIK it's a read-only access, but it works. Now, I have some disk images of stuff I usually install, so I grabbed a copy of Daemon Tools to mount them. When I mount an image saved on the HFS+ partition, about two out of three installers on these disks (usually InstallShield) crash with all sorts of weird errors. Most are just gibberish that lead to all sorts of non-solutions on Google, one was "This application is not the right type for your computer, check if you need 32 or 64 bit versions." When moving the image files to another Windows 7 computer on the network and mounting them from the network share, they work fine. My question now is, why do applications behave differently depending on whether the read-only image file, which should be abstracted away through the read-only virtual Daemon Tools drive, is located on a read-only HFS+ partition or on a Windows network share? And I'll just roll this into the question as well since I was wondering: Does the file system of a network share matter? Does the client system need to understand the file system of the share host or is that abstracted away in SMB?

  • Excluding files from web logs

    - by Ray
    I originally tried this question on StackOverflow, but it was suggested that serverfault was a better choice. So, here it is... Looking through my web logs, I see a lot of entries that don't interest me. Some of them are commonly used images, css files, and scripts, which I can easily exclude by un-checking the 'log visits' check box in IIS for the folder properties. I would also like to exclude log entries for certain common requests which are not in their own folders. Mostly, 'favicon.ico'. 'scriptresource.axd', and 'webresource.axd'. These (especially scriptresource.axd) make up almost a third of a typical log file on my site. So, the question is, how do I tell IIS not to log these requests? And is there any reason that this is a bad idea? The purpose of doing this is to reduce the log file size and the amount of work the server has to do, to make the log file more manageable when I need to dig in to them for troubleshooting, and for my own curiosity. I realize that log file parsers can skip the junk, but I am interested in reducing the raw files, before parsing.
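
    One hedged possibility, assuming IIS 7 or later: logging can be switched off for individual URLs from web.config with a location element, which would cover files like favicon.ico that don't live in their own folder. A sketch, not tested against the poster's setup:

      <!-- web.config fragment: don't log requests for this specific path -->
      <location path="favicon.ico">
        <system.webServer>
          <httpLogging dontLog="true" />
        </system.webServer>
      </location>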

  • How can I trigger the creation of a new CLB file?

    - by Xperimental
    I'm currently having a problem with an application using COM running on Windows Vista. The application runs OK on one machine, but doesn't work on a similarly configured machine. Both machines are virtual images originating from the same source image. While searching the registry for causes of this error, I came across the CLBVersion key in HKCR\CLSID, which seems to have something to do with COM. The value of the key differs between the two machines (0x6 on the erroneous one, 0xc on the working one). There are also files containing the same number in their filenames in the %SystemRoot%\Registration directories of the machines. They are called R000000000006.clb and R00000000000c.clb respectively. I have already searched the Windows event log for anything leading to the creation of those files (I searched by the creation date of the files). Now a few questions regarding the registry keys and the files: Is it correct that this is connected to COM? What is the function of the files? What causes the creation of a new "CLBVersion"? Is there a way for me to trigger the creation of a new CLB file? edit: I have now found out that this has nothing to do with my application error. But I would still be interested in details about the registry key and the files. An installation of Visual Studio 2005 brought the second machine to the same configuration (0xc in registry and file) as the other one.

  • HTTPS and Certification for dummies

    - by Poxy
    I had never used https on a site and now want to try it. I did some research, but I'm not sure I understood everything. Answers and corrections are greatly appreciated. Here we go: To use https I need to generate 'private' and 'public' keys for the web server I use. In my case it's Apache (manual: http://httpd.apache.org/docs/2.0/ssl/ssl_faq.html). The HTTPS protocol should be bound to port 443. Q: How do I do that? Is it done by default? Where can I check the configuration? Applying https. Q: If I see https in the browser, does it mean that the data traffic on the page IS encrypted? Would any form on the page submit data via https? Though all the data is going to be encrypted, browsers would still show ugly red warnings. This is just because they do not know anything about my certificate. They have about a hundred certificates pre-installed, but mine is not one of them, obviously. But the data IS encrypted by https. If I want browsers to recognize my certificate, I would need to have it signed by one of the certification authorities (CAs) whose certificates are pre-installed (e.g. Thawte, GeoTrust, RapidSSL etc). UPD: To read about SSL/TLS, see "The First Few Milliseconds of an HTTPS Connection"; I found it very informative. Examples for PHP (openssl.org) of how to make use of SSL/TLS on the server side are published here.
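
    To make the port 443 question concrete, here is a minimal Apache 2.2-style sketch; the hostname and file paths are placeholders, and the key/certificate are the ones generated for the server:

      Listen 443
      <VirtualHost *:443>
          ServerName www.example.com
          DocumentRoot /var/www/example
          SSLEngine on
          SSLCertificateFile    /etc/apache2/ssl/example.crt
          SSLCertificateKeyFile /etc/apache2/ssl/example.key
      </VirtualHost>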

  • Managing disk in a VM

    - by dst
    I'm replacing my two old rack servers with a new one that has plenty of power to take over the functionality of my current servers. The server is a 4U rack mount with 16 3.5" SAS drive bays, two 2.5" bays, a Xeon E3-1230v2 CPU and 32GB of ECC RAM. My issue is the following: I would like to have a FreeBSD file server with ZFS managing the disks. However, I need other VMs as well, e.g. a shell/git server, a mail server etc. I'm wondering how to deal with the following issues: (1) I want ZFS to fully manage the disks, so I'm not using any hardware RAID. Should I pass the SAS controller directly to the FreeBSD system as PCI passthrough? (2) I want to maximize the reliability of the setup. On what disks should I install the hypervisor and keep the server system disks? For (2) I have the option of having a RAID setup on the SAS controller and using that as the system disk to store the hypervisor as well as the VM images. However, this makes PCI passthrough to the file server impossible. Another option is using the two 2.5" bays. In terms of reliability, how do SSDs compare to e.g. WD RE4 disks? Would it make sense to have two SSDs in software RAID as boot disks for the hypervisor, or should I just go with e.g. WD RE4 disks in a software RAID setup? I also need to think about where to store the mail for the mail server, but this could be done over NFS between the VMs. BTW, this is for home use, so the load is not really that big. What I'm looking for is best practices for splitting up a server.
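
    As a rough illustration of the "ZFS fully manages the disks" part, a FreeBSD-side sketch; the device names and dataset layout are placeholders:

      # two mirrored vdevs built directly from raw disks (no hardware RAID underneath)
      zpool create tank mirror da0 da1 mirror da2 da3
      # a dataset for VM disk images, exported to the other VMs over NFS
      zfs create tank/vmimages
      zfs set sharenfs=on tank/vmimages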

  • Copy a website and preserve the file & folder structure

    - by DrStalker
    I have an old web site running on an ancient version of Oracle Portal that we need to convert to a flat-HTML structure. Due to damage to the server we are not able to access the administrative interface, and even if we could, there is no export functionality that can work with modern software versions. It would be enough to crawl the website and have all the pages & images saved to a folder, but the file structure needs to be preserved; that is, if a page is located at http://www.oldserver.com/foo/bar/baz/mypage.html then it needs to be saved to /foo/bar/baz/mypage.html so that the various JavaScript bits will continue to function. None of the web crawlers I've found have been able to do this; they all want to rename the pages (page01.html, page02.html etc) and break the folder structure. Is there any crawler out there that will recreate the site structure as it appears to a user accessing the site? It doesn't need to redo any of the content of the pages; once rehosted, the pages will all have the same names they did originally, so links will continue to work.
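
    One option that does preserve the original paths is wget's mirror mode; a hedged sketch using the placeholder URL from the question:

      # -m/--mirror recurses and keeps the site's directory layout;
      # --page-requisites also pulls the images/CSS/JS each page needs;
      # --no-parent stops it wandering above the starting path
      wget --mirror --page-requisites --no-parent http://www.oldserver.com/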

  • Are there any open source reseller packages?

    - by Tom Wright
    My department has just been given the right/responsibility to manage our own VPS. The idea is that the bureaucracy will be less for the many small web projects we run. Since each project will be managed by a different team, I was planning on approaching it with a shared hosting model. Are there any free pieces of software that would help automate the provisioning of resources each time a team requests a new project? Most of the projects have identical requirements - basically LAMP - so it is these resources that I would want provisioned (and de-provisioned, if that is a word) automatically. Ideally, there would also be a way to hook it into our LDAP authentication backend too, though I could probably make this sort of modification if necessary. Since we won't be charging our "client", however, we won't need the ability to generate invoices, handle payments, etc. EDIT: Sample workflow:
      - Login authenticated against LDAP
      - Username checked against admin group (not on central LDAP)
      - Click 'new project' and enter project name
      - User created on VPS with project name as username
      - Apache virtual host created and subdomain (using project name) allocated
      - FTP & MySQL users created
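
    In case nothing off the shelf fits, the workflow above is small enough to sketch as a shell script; everything here (names, paths, the Debian-style Apache layout) is a placeholder and the LDAP checks are left out:

      #!/bin/sh
      # usage: newproject.sh <projectname>   (hypothetical helper, not a real package)
      P="$1"
      useradd -m "$P"                                    # system/FTP user named after the project
      mysql -e "CREATE DATABASE \`$P\`;
                CREATE USER '$P'@'localhost' IDENTIFIED BY 'changeme';
                GRANT ALL PRIVILEGES ON \`$P\`.* TO '$P'@'localhost';"
      printf '<VirtualHost *:80>\n  ServerName %s.example.com\n  DocumentRoot /home/%s/public_html\n</VirtualHost>\n' \
          "$P" "$P" > "/etc/apache2/sites-available/$P.conf"
      a2ensite "$P.conf" && service apache2 reload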

  • SQL Server Offsite Backups

    - by Eric Maibach
    We have about 1 TB of SQL Server databases, and these databases generate about 200GB of data changes each day. Up to this point we have been doing weekly full backups, daily diff backups, and hourly transaction log backups. The full and diff backups are backed up to tape and taken offsite each day. We have been trying to move away from tapes, and our IT department purchased a Barracuda Backup device that backs up data and then sends it offsite using our internet connection. I have been trying to get this to work for our SQL Server backups, and have run into a number of problems. I normally like to just use SQL Server to perform backups instead of trying to use an agent, so that is what I tried first. However, the Barracuda device was not able to dedupe these files very well, so it ended up being too much data to try to send offsite and to archive. I then tried installing the Barracuda agent and using it to back up the SQL Server databases. However, the problem I am having there is that on some of the database servers I also have files that need to be backed up, and I cannot find a way to create separate backup schedules for the file backups and the SQL Server backups. Barracuda only does full or transaction log backups. So if I want to do hourly transaction log backups I end up doing a file system backup every hour (which is not good), or if I only schedule the backups to run once a night I either have to do a full backup every night, or only do a transaction backup once a day. None of these scenarios are good options. My question is: how is everyone else getting their large SQL Server database backups offsite? Are you just using tape, or have you found an offsite backup device that works well? Is anybody else using Barracuda to back up their SQL Server databases? If you do, then how do you have it set up?
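
    For reference, the schedule described above comes down to three T-SQL statements (the database name and paths are placeholders); how they are shipped offsite is the open question, but these are the pieces any agent or device has to work with:

      -- weekly full backup
      BACKUP DATABASE [MyDb] TO DISK = N'D:\Backups\MyDb_full.bak' WITH INIT;
      -- daily differential backup
      BACKUP DATABASE [MyDb] TO DISK = N'D:\Backups\MyDb_diff.bak' WITH DIFFERENTIAL, INIT;
      -- hourly transaction log backup
      BACKUP LOG [MyDb] TO DISK = N'D:\Backups\MyDb_log.trn' WITH INIT;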

  • Generating new SID for Windows 7 cloned partition in Linux?

    - by Jack
    So I've read that the proper way to clone a Windows 7 partition is to run a Sysprep after the clone is complete. For MANY reasons, this is not possible the way we are cloning these drives (long story short, the drive should be fully up and running after we clone it, with all the settings already there and requiring no user intervention; and no, not even an answer file would work because the way we customize all the Win7 settings is complex and we do not want the user touching the settings). I understand Microsoft will not support Windows 7 clones if it is not sysprepped and that is fine for us. Acronis recovery tools get around this by ticking an option called "Create new NT signature", which resets the SID and GUID on any restore. Symantec has a tool called Ghostwalker which does the same thing. However, we are looking for a way to do this in Linux because we want to use open source tools to do the imaging (fsarchiver, partclone, etc. basically the same tools Clonezilla uses internally to clone NTFS partitions). The question is, if we clone using these tools in Linux, how would we generate a new SID thereafter (without the use of sysprep)? Is there any way to do it within a Linux environment? The whole image process is automated so if it is a simple command that I can just throw in my shell script, that would be even better. Of course, it would be nice to know if this is even possible. Any ideas? EDIT: Forgot to mention that the target machines we are restoring the image on are EXACTLY the same.

  • Parallels Plesk returning strange numbers

    - by Jack W-H
    Hi everyone, as a relatively new server admin I've become a bit confused by some statistics Parallels Plesk Panel 10.0.1 is returning to me.
      - I have a domain ('subscription') set up, mysite.com
      - Mysite.com only hosts files, mostly images
      - Its file contents use up about 390MB of disk space
    Here's what Plesk is reporting mysite.com to use (screenshots not reproduced in this excerpt). Now this is pretty confusing... I thought at first my site might have been hacked and had contents written to disk, but I checked and all is in order; nothing has been hacked into as far as I can tell. So I had a look in the site's CP for some more in-depth statistics. Sod's law - when I go to check my disk space statistics in more depth via the control panel this morning, it says "The data were not collected yet." - not too sure what that means - but last night when I checked it was reporting something odd. It said files were using up 390MB, but 1.80GB or so was being used up by 'Mail Accounts'. This is really strange, as there are no mail accounts set up for the domain. The only hint of 'mail' there is, is the catchall set up to forward *@mysite.com to a separate, ISP-hosted email account. Any ideas, anybody? I can post more details if you need them. Sorry to be a bit vague, but I'm not sure what else I can post. Thanks, Jack

  • maximum number of connections Squid

    - by Isaac
    I have a Squid proxy server that controls all internet traffic for my network. I need a way to stop users from downloading big files (say 50MB) in my network. I banned some well-known ports (e.g. torrent), but some downloads are still possible over the HTTP port, and obviously I cannot ban port 80! A simple solution is limiting the maximum number of simultaneous connections for each IP (e.g. 3 connections). It's possible in Squid with this config:
      acl ACCOUNTSDEPT 192.168.5.0/24
      acl limitusercon maxconn 3
      http_access deny ACCOUNTSDEPT limitusercon
    But this solution has a really bad impact on web browsing, because any smart browser fetches different parts of a website over several simultaneous connections to speed up browsing. If we cap the number of connections, the browser will fail to get some parts and the website will be shown only partially; some parts/images/frames will not appear. So, can we limit the maximum number of persistent connections instead? I think this policy would work: cap the number of connections that stay alive for more than 10 seconds, but leave the number of simultaneous connections for each IP unlimited. But how can we implement this policy in Squid? With which config? UPDATE: artifex and Tom Newton suggested a bandwidth-limiting approach to fight downloaders. But bandwidth limiting in Squid has a shortcoming: it's static and cannot change dynamically, so a person gets the same limited bandwidth no matter how many people are using the internet (maybe nobody!). Also, this solution cannot stop people from downloading; they can still download, just at a lower speed. But if we find a way to terminate persistent connections (or any connection that has been alive for more than a specific time), downloading big files will be almost impossible (there is always some way!)
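
    If the underlying goal is really "no downloads over 50MB" rather than a connection cap, Squid can limit reply size directly; a sketch in Squid 3.x syntax (2.x words the directive slightly differently), reusing the subnet from the config above:

      # refuse any reply larger than 50 MB for this subnet
      acl lan src 192.168.5.0/24
      reply_body_max_size 50 MB lan
      # optionally cap how long any single client connection may stay open (default is 1 day)
      client_lifetime 8 hours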

  • Ram question in VMware Server 2

    - by ToreTrygg
    Hi, I understand from the VMware Server 2 documentation that VMware Server 2 is capable of running a 64-bit guest OS underneath a 32-bit host OS, as long as the hardware in the box is 64-bit capable. Here's my situation: we currently have an underutilized Xeon X3220 quad-core 64-bit server, running Server 2003 32-bit with 2GB of RAM (the motherboard is capable of 8GB of RAM). The server is currently used mainly for file and print services. It is also running Active Directory, Novell eDirectory and GroupWise 6.5. We are planning a migration to Microsoft Exchange, so the Novell eDirectory and GroupWise services will eventually be purged from this box, leaving only Active Directory and file and print services. Since this server is underutilized, we are hoping to save on hardware costs and virtualize our new Exchange investment. My question is this: will VMware allow access to the "invisible" extra memory that 32-bit Windows won't see? Meaning, if we increase the total amount of system RAM to 8GB (yes, I know the 32-bit host OS will only see a maximum of 4GB), will I be able to assign maybe 5GB to the new Server 2008 64-bit OS running Exchange and leave 3GB for the host OS (or maybe even a 6/2 split)? The second part of that would be: would it be better to just convert the currently running OS to an image, convert the machine itself to ESXi, and run both OSes as images under ESXi? Downtime for this box is critical, so my preference is most definitely the first option because it presents very minimal downtime. Doing the second would mean quite a few hours of downtime to image the machine and then convert the image to a VMware image.

  • How to move or delete files from a folder containing 2 million files on an NTFS drive?

    - by Beau
    The issue is that any modification to the directory locks up Explorer indefinitely, though Samba access to other directories still works. I've tried moving files locally and over Samba. Even enumerating the directory to get the list of files locks up the computer indefinitely. I tried using Python's win32file.FindFilesIterator to iterate the files but that also hangs. My idea was to move each file to a different directory (in a directory above the directory we're dealing with) based on its timestamp, so that we'd have at most a thousand or so files in each directory... But since I can't even enumerate the files, that's been a non-starter. If I have to give up and just nuke the directory I'm willing to do that, but a standard delete also hangs indefinitely. I have set these two parameters to increase speed and they also did not help the issue:
      R:\>fsutil behavior query disablelastaccess
      disablelastaccess = 1
      R:\>fsutil behavior query disable8dot3
      disable8dot3 = 1
    These are all sequential images that would have run into the 'bug' with 8.3 filenames whereby many similarly named files in one directory can take a long time to compute 8.3 filenames. From what I understand this data is stored in the file system even after disable8dot3 is enabled, so it may still be contributing to the problem. Any ideas?
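
    A sketch of the move-by-timestamp idea using a lazy directory iterator, so the two-million-entry listing is never built in memory; the paths and the per-day bucket scheme are placeholders, and it assumes Python 3.6+ for os.scandir as a context manager:

      import os
      import shutil
      import time

      SRC = r"R:\huge_folder"   # placeholder source directory
      DST = r"R:\sorted"        # per-day buckets get created underneath this

      with os.scandir(SRC) as entries:          # yields entries lazily, no giant list
          for entry in entries:
              if not entry.is_file():
                  continue
              day = time.strftime("%Y-%m-%d", time.localtime(entry.stat().st_mtime))
              bucket = os.path.join(DST, day)
              os.makedirs(bucket, exist_ok=True)
              shutil.move(entry.path, os.path.join(bucket, entry.name))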

  • Screen Flickering: Hardware or Software?

    - by Wesley
    I have a Samsung N120 netbook (upgraded to 2GB DDR2 RAM) and there has been a screen flickering issue for some time now. However, I have not been able to accurately determine whether it is a software or hardware issue. Here are some of the symptoms: The flicker is white-colored and shows up as vertical lines. Flickering or not, there may be occasionally some random blue patterns (no image distortion) The screen tends to flicker more when the screen is not tilted back all the way. When tilting the screen back and forth, the screen will usually flicker. Some images on the screen may randomly distort without full-on flickering. The screen will flicker only on certain websites, but not on others. A certain part of a webpage may constantly be distorted randomly, even when scrolling. While flickering, the mouse will not move though I'm moving my finger along the touchpad. A connected external monitor does not have any problems. The flickering is completely random and does not seem to follow any CPU/GPU usage trends. Flickering usually gets worse when the screen brightness is turned higher. There will be flickering on battery and while plugged in. Search up "Samsung N120 - Screen Flickering" on YouTube for an idea of what the flickering looks like. However, there is no visible distortions and the flickering seems to stop when the screen has dimmed. Since the problems started, I tried formatting and using Windows 7, then formatted again and went back to Windows XP. The screen was also replaced sometime during this past summer. The uninstallation of the Samsung Battery Manager (on the original install of XP) seemed to reduce the flicker partially, but eventually got worse. So, what could possibly be the problem?

  • Excel fails to open Python-generated CSV files

    - by johnjdc
    I have many Python scripts that output CSV files. It is occasionally convenient to open these files in Excel. After installing OS X Mavericks, Excel no longer opens these files properly: Excel doesn't parse the files and it duplicates the rows of the file until it runs out of memory. Specifically, when Excel attempts to open the file, a prompt appears that reads: "File not loaded completely." Example of code I'm using to generate the CSV files:
      import csv
      with open('csv_test.csv', 'wb') as f:
          writer = csv.writer(f)
          writer.writerow([1,2,3])
          writer.writerow([4,5,6])
    Even the simple file generated by the above code fails to load properly in Excel. However, if I open the CSV file in a text editor and copy/paste the text into Excel, parse it with text to columns, and then save as CSV from Excel, then I can reopen the CSV file in Excel without issue. Do I need to pass an additional parameter in my scripts to make Excel parse the CSV files the same way it used to? Or is there some setting I can change in OS X Mavericks or Excel? Thanks.
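
    One hedged variation to try: the csv module lets the dialect and line terminator be set explicitly, which is a common knob when Excel misreads otherwise valid files. This assumes Python 2, matching the 'wb' mode above:

      import csv

      with open('csv_test.csv', 'wb') as f:
          # 'excel' is already the default dialect; spelling it out makes the
          # delimiter and the \r\n line endings easy to tweak if Excel still balks
          writer = csv.writer(f, dialect='excel', lineterminator='\r\n')
          writer.writerow([1, 2, 3])
          writer.writerow([4, 5, 6])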

  • My home box as my own host?

    - by Majid
    Hi all, I have a 512 kb/s DSL service at home. I do not have a static IP, but I can get one if I pay extra to my ISP. Now, if I get the static IP, can I make my home box act as my internet host? What else do I need? Thanks. P.S. I know that, if it is possible at all, the site I make available this way might be slow; that is alright. My question is whether it is possible at all. Edit: I need this for very small traffic. I am a PHP developer and for my projects I am often asked to provide a demo. I currently use a free hosting service for this purpose, but it is down most of the time and support is non-existent. So I thought to set up my home computer as my test server. With this, please note that I will only occasionally have 'visitors', and that will be one or possibly two visitors at any time. These demos are to showcase functionality, so no big images are served, and page views will normally generate under 100KB of traffic.

  • Looking for "bitmap-vector" image editor

    - by Borek
    I used to use PhotoImpact, which is no longer developed, so I'm looking for a replacement. What made PhotoImpact great for me was the ability to work in both bitmap and vector modes. What I mean by that: I could take an image or screenshot and easily add arrows, text captions or shapes to it. These shapes were vector objects, so I could come back to them later and amend their properties easily. Software I know of:
      - Paint.NET is purely bitmap, so please don't recommend it - layers are not enough for my needs
      - Drawing tools in MS Office work pretty much the way I'd like - you can paste an image and then add vector objects on top of it. It just doesn't feel right to have the full-fidelity original images stored as .docx or .pptx (I don't fully trust Word/PowerPoint not to compress the image)
      - I'm not sure about GIMP, but if it's just a "better Paint.NET" (i.e., layers but no vector objects) I'm not interested
      - Photoshop is out of the question purely because of its price tag
      - Corel killed PhotoImpact because they already had a competing product (Paint Shop Pro), but AFAIK it lacks vector features
    Any tips for PhotoImpact alternatives would be very welcome.

  • Centos Server/MySQL server problem

    - by Jake
    Hello all, I currently run a website that gets about 15,000-20,000 hits a day. We run a very active forum, hosted using vBulletin software. We have 4.5 million posts and 80,000 threads, with about 11,000 members, of which just under a third are active all the time. I am running an Intel Xeon quad core (2.13GHz) with 4GB of RAM, CentOS 5.5, and DirectAdmin on the box to manage it. I also run the current stable versions of Apache, MySQL, and PHP. This is the only site hosted on this machine. Now, at random times of day, sometimes when it gets busy, the server load can reach 20, but this can also happen when we only have about 200 users active. I don't understand what is causing these problems. Sometimes I get pages that generate in 0.2 seconds; other times it takes 5-8 seconds. I have customized the my.cnf file and that has not helped anything. I didn't know where else to turn, so if anyone has any suggestions please let me know. Thank you in advance.
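
    A hedged first step for load spikes like this is MySQL's slow query log, which shows whether particular forum queries are behind the 5-8 second pages; a my.cnf fragment using MySQL 5.1+ option names, with a threshold that is only a starting point:

      [mysqld]
      slow_query_log      = 1
      slow_query_log_file = /var/log/mysql/slow.log
      long_query_time     = 2   # seconds; lower it once the obvious offenders are fixed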

  • Browser says "Waiting for www.xyz.com" for a very long time

    - by Phil
    When I load my website (hosted with Ipage), the browser often takes an incredibly long time saying "Waiting for www.xyz.com ..." before any elements of the site actually appear. After this "Waiting for" phase, the text, images and everything else actually load quite fast. I contacted my host with my tracert result and they said they optimized my website database and increased the memory available to PHP on my account to 64 MB. They also said they have checked the issue by accessing my website and found that it is loading fine without any slowness; it seems to be a temporary issue, and they asked me to try accessing my website with a different browser and network. I tried different browsers and networks, but this "Waiting for" phase always takes too long. My website is http://www.surreyextra.com/ . It's WordPress and BuddyPress. I'm in the UK while the Ipage host is in the USA; could this potentially be a problem? I have tried a number of optimizations, like minifying my CSS and JS files and using caching, but the problem hasn't improved. So is it my host's fault? Should I contact them again?
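
    A hedged way to put a number on that "Waiting for..." phase is curl's timing variables, which split the delay into DNS, connect and time-to-first-byte; running it both from the UK and from a US host would show whether distance or the server itself dominates:

      curl -o /dev/null -s -w "dns: %{time_namelookup}s  connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n" \
           http://www.surreyextra.com/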
