Search Results

Search found 19904 results on 797 pages for 'software enthusiastic'.

Page 685/797

  • Windows XP corrupts registry every several hours

    - by Ilya Kazakevich
    There is a Dell XPS 400 with Windows Media Center installed. It runs on RAID (Intel Matrix Storage) built into the chipset south bridge. The RAID has two 150 GB WDC drives connected as a mirror. All drivers and updates are installed (SP3 and so on). A week ago the PC changed its video mode to 256 colors (like VESA mode) and after a few moments I got a BSOD: c000021a: 0xc0000005. Dr. Watson did not create a dump although it is installed as the default debugger. After a reboot it said that the config file is missing or corrupted. So I booted to the recovery console and found that the registry file (config) was tiny. I replaced it with one from a restore point and Windows booted successfully. But after about 3 hours it crashed again in the same way! I looked in Event Viewer: it said that Explorer.exe failed to open \global??\DLIAFS. I looked in WinObj and found that it is a device. I set a "deny for everyone" ACL on this device, and after several hours Windows crashed again. I restored the registry, booted again, and there was no error about DLIAFS. I did a full chkdsk and it did not find anything bad. But I found an event about a paging error on \Harddrive1\D. I do not have a pagefile there, but I thought I should check my disk again. Unfortunately I cannot use SMART tools on the RAID, but I downloaded the latest software from Intel (it can do the same things the RAID BIOS can, but from Windows). It verified my disks, found some errors, fixed them, then I rebooted. And it crashed again. I am lost. What (except kernel debugging) could be done here? Thanks
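
    A hedged aside for anyone triaging similar recurring hive truncation: a minimal watcher script can log the size of the SYSTEM hive over time, which helps pin down when the corruption happens relative to the crashes. A Python sketch, assuming the default XP hive path; the log path and poll interval are arbitrary choices:

        import os, time, datetime

        # Hypothetical paths -- adjust for the machine being watched.
        HIVE = r"C:\WINDOWS\system32\config\system"
        LOG = r"C:\hive_size.log"

        while True:
            size = os.path.getsize(HIVE)
            stamp = datetime.datetime.now().isoformat()
            with open(LOG, "a") as log:
                log.write("%s %d\n" % (stamp, size))
            time.sleep(300)  # poll every 5 minutes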

    Read the article

  • Hard Drive benchmark values show write very very slow

    - by John
    I recently started having issues with my laptop being very slow. I ran a hard drive benchmarking tool (by ATTO) that showed the write speed was very, very slow on my boot drive. I ran the same benchmark on my USB drive and it was 650 times faster than my boot drive when it came to writing. Reading is very fast/normal on both. I swapped in an identical drive and ran the same benchmark. This time the drive showed proper write speed. Thinking I had a hard drive going bad, I cloned the old one onto the new one. I managed to clone the problem too. Anyone have any ideas on what in WinXP SP3 might be causing the write issues? I am on a corporate network and we have commercial anti-virus software installed (AVG, I think). I regularly run Defraggler and have about 40 GB free on a 100 GB drive. The machine has 4 GB of memory. Any ideas? TIA, J
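
    For cross-checking a tool like ATTO, a crude sequential-write benchmark is easy to script. A Python sketch (file path, block size and total size are illustrative choices) that forces data to disk with fsync so the OS cache doesn't mask a slow drive:

        import os, time

        PATH = "bench.tmp"          # place this on the drive under test
        BLOCK = b"\0" * (1 << 20)   # 1 MiB of zeroes
        COUNT = 100                 # write 100 MiB total

        start = time.time()
        with open(PATH, "wb") as f:
            for _ in range(COUNT):
                f.write(BLOCK)
            f.flush()
            os.fsync(f.fileno())    # make sure the data actually hits the disk
        elapsed = time.time() - start
        print("%.1f MB/s" % (COUNT / elapsed))
        os.remove(PATH)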

    Read the article

  • Synchronizing files between Linux servers, through FTP

    - by Daniel Magliola
    I have the following configuration of servers: 1 central Linux server (a VPS) and 8 satellite Linux servers ("crappy shared hostings"). I have a bunch of files that I need to have on all servers. Right now I'm copying them everywhere manually, but I want to be able to copy them to the central server and then have a scheduled process that runs every now and then and synchronizes them (only outwardly; no need to try to find "new" files on the satellite servers). There are a couple of catches though: I can't have any custom software on the satellite servers, or do strange command-line things that auto-connect to them and send the files directly. I know this is the way these kinds of things are normally done, but the satellite servers are crappy shared hosting ones where I have absolutely no control over anything. I need to send the files over FTP. I also need to have, on my central server, a list of the files that are available on each of the satellite servers, to make sure they are ready before I send traffic to them. If I were to do this manually, the steps would be: get the list of files on a satellite server; compare it to my own and send the files that are missing; get the list of files again and store it in my central database. I'd like to know what tools are out there that can alleviate as much of this as possible, first the syncing, and then the getting of the list of files available on the other server. I'm going to be doing everything from PHP; I'm not sure if there are good tools to "use FTP from PHP", which I'm pretty sure I'll have to do for step 3 at least. Thanks in advance for any ideas! Daniel
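
    The compare-and-push steps described above map directly onto a plain FTP client. A minimal sketch using Python's ftplib (the poster plans to use PHP, whose ftp_* functions follow the same pattern; host, credentials and paths here are placeholders): list the remote files, upload whatever is missing, then list again for the central database.

        from ftplib import FTP
        import os

        def sync_one_satellite(host, user, password, local_dir):
            ftp = FTP(host)
            ftp.login(user, password)
            remote = set(ftp.nlst())                    # step 1: remote file list
            for name in os.listdir(local_dir):          # step 2: push missing files
                if name not in remote:
                    with open(os.path.join(local_dir, name), "rb") as fh:
                        ftp.storbinary("STOR " + name, fh)
            available = ftp.nlst()                      # step 3: final list for the DB
            ftp.quit()
            return available

        # files = sync_one_satellite("ftp.example.com", "user", "pass", "/srv/files")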

    Read the article

  • Value of Itanium over x86_64 for Oracle Deployment

    - by Antitribu
    We are looking at a new environment to run our Oracle database, currently on SUSE (potentially migrating to Red Hat). Our database is approximately 100 GB and performs adequately on our current hardware (x86_64) with approximately 6 GB of RAM allocated to it. We are growing quickly, however, and will require more performance shortly. Given the cost of Oracle licenses, we would like to maximize the value of each license by choosing the most appropriate CPU to run the software on. The questions are: Are there substantial benefits to looking at Itanium hardware, and are there any drawbacks? Is there a point where Itanium starts to scale out better? What are the long-term support options for Itanium? Given the dominance of x86, would it be safer long term to stick with x86? On average, what would be the performance benefit of implementing an Oracle database on Itanium over x86_64? Is this an issue at all, or will other factors (I/O, RAM) cap out first? If anyone can point me towards some solid documentation comparing the two platforms, with good case analysis of when to choose which, I'm more than happy to accept that as an answer.

    Read the article

  • Different network response for identical co-located machines

    - by Santosh
    We have a situation as follows: we have two different virtual machines (VMs) on a remote server farm. The machines are identical in terms of hardware/software (OS) configuration. We have a J2EE application running on JBoss on each of those two machines. These two applications are different versions, say V1 on VM1 and V2 on VM2. We observed some degraded response time for application V2 when accessed via the public URL. When we accessed the application through a secured VPN, there was hardly any difference. A bandwidth test (upload/download speed, ping, etc.) shows that VM1 responds better when accessed via the secured VPN. We concluded that the application does not seem to have a performance issue, because if that were the case the performance degradation should also be there when accessed via VPN. So we concluded it's a network problem. But since those two identical VMs are on the same network, we are looking for reasons for the different responses. My question is: given the above situation, what could be the reasons for such behavior?
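
    One way to put numbers on "degraded via the public URL, fine via VPN" is to sample both paths from the same client and compare. A rough Python sketch (both URLs are placeholders) that times repeated HTTP fetches over each route:

        import time
        import urllib.request

        def sample(url, n=20):
            times = []
            for _ in range(n):
                t0 = time.perf_counter()
                urllib.request.urlopen(url, timeout=30).read()
                times.append(time.perf_counter() - t0)
            times.sort()
            return times[len(times) // 2], times[-1]   # median and worst case

        for url in ("http://public.example.com/app", "http://10.0.0.5/app"):
            median, worst = sample(url)
            print("%s median=%.3fs worst=%.3fs" % (url, median, worst))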

    Read the article

  • Simulate SNMP traps to test surveillance

    - by jishi
    I'm trying to use Net-SNMP on Windows to emulate a trap that should trigger an alarm on our surveillance system. This is the setup: a Windows 7 client that sends the trap, Net-SNMP as the software for sending the trap, and Linux with AdventNet ManageEngine OpManager as the NMS (not relevant). What I'm trying to accomplish: send a trap with OID .1.3.6.1.4.1.5089.1.0.1 (according to the MIB I have loaded into my NMS) with just some sort of message in it, to see if I can get an alarm in my NMS. I can see in my firewall that I actually send a trap, but I have no idea what it contains. This is my attempt so far:

        snmptrap.exe -v 2c -c xxxxxxx 192.168.100.65 '' 6 0 .1.3.6.1.4.1.5089.1.0.1 s "123456"

    However, I can't seem to find any reasonable documentation with examples for snmptrap. Basically, I need to know:

    '' <- why do I need this? I can omit it and it will still send a trap
    6 <- enterprise generic trap, I assume. Is this correct?
    0 <- I have no idea; I need some sort of value for this
    .1.3.6.1.4.1.5089.1.0.1 <- the enterprise-specific OID, I assume; should this be followed by some more numbers?
    s <- indicates string
    "123456" <- just a random test string...

    This doesn't make much sense to me, and if anyone can shed some light on this I would be very grateful.
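
    For reference, in net-snmp's v2c syntax the argument right after the host is the sysUpTime value (the empty '' means "use the current uptime"), while the generic/specific pair (the 6 and 0 here) belongs to the older v1 trap syntax, so mixing them with -v 2c is a likely source of confusion; worth double-checking against the snmptrap man page. As a cross-check independent of the CLI, a v2c trap with the same trap OID and a string varbind can be sent with Python's pysnmp; a sketch where the community, target and the varbind OID .1.3.6.1.4.1.5089.1.1 are assumptions:

        from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                                  ContextData, NotificationType, ObjectType,
                                  ObjectIdentity, OctetString, sendNotification)

        errorIndication, errorStatus, errorIndex, varBinds = next(sendNotification(
            SnmpEngine(),
            CommunityData("xxxxxxx", mpModel=1),             # mpModel=1 -> SNMPv2c
            UdpTransportTarget(("192.168.100.65", 162)),
            ContextData(),
            "trap",
            NotificationType(ObjectIdentity("1.3.6.1.4.1.5089.1.0.1")).addVarBinds(
                # assumed varbind OID carrying the test message
                ObjectType(ObjectIdentity("1.3.6.1.4.1.5089.1.1"),
                           OctetString("123456")))))

        if errorIndication:
            print(errorIndication)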

    Read the article

  • How to fix massive lag on ZyXEL HomePlug AV powerline adapters?

    - by Tim Abell
    I have 3 ZyXEL HomePlug AV powerline adapters as per the one in the review below. I have two plugged in currently, one into my Be / Thompson wireless router, and one into my desktop PC (box1). Every now and then the link indicator on the adapters (the mains link, not the ethernet link) goes nutty, and performance falls off a cliff (see below).

    http://www.gadgetspeak.com/gadget/article.rhtm/753/479266/ZyXEL_PowerLine_HomePlug_AV_PLA401.html

        64 bytes from box1 (192.168.1.101): icmp_seq=1064 ttl=64 time=996 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1065 ttl=64 time=549 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1066 ttl=64 time=6.15 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1067 ttl=64 time=1400 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1068 ttl=64 time=812 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1069 ttl=64 time=11.1 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1070 ttl=64 time=1185 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1071 ttl=64 time=501 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1072 ttl=64 time=1975 ms
        64 bytes from box1 (192.168.1.101): icmp_seq=1073 ttl=64 time=970 ms
        ^C
        --- box1 ping statistics ---
        1074 packets transmitted, 394 received, +487 errors, 63% packet loss, time 1082497ms
        rtt min/avg/max/mdev = 5.945/598.452/3526.454/639.768 ms, pipe 4

    Any idea how to diagnose/fix? I'm on Linux, so installing the windoze software that came with them is not something I'm terribly keen to do.

    Read the article

  • IIS 7 + Tomcat 7 - how to reach http://localhost:8080/my_app under e.g. http://my_app.local

    - by Sk8erPeter
    In brief: IIS 7 + Apache Tomcat 7 + isapi_redirect.dll. I have a deployed and working Tomcat application available under http://localhost:8080/my_app. I would like to see the same content under http://my_app.local (and NOT the default Tomcat site [which you can see below]). How can I do that? Explained at more length: I have IIS 7 (7.5.7600.16385) and Apache Tomcat/7.0.22 installed. I deployed an application (let's call it "my_app") in Tomcat, which can now be reached at http://localhost:8080/my_app; it works fine. I added a new web site in the IIS panel with the path of the Tomcat-deployed my_app, which looks like this: "c:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\my_app". I bound the host name my_app.local. After that, I configured isapi_redirect.dll like this (or that). Now, when I open http://my_app.local, I see the default Tomcat site (see below). BUT under http://my_app.local I would like to see the same content as under http://localhost:8080/my_app. How can I do that? Thank you very much in advance!! My config files: isapi_redirect.properties (I made a directory junction to c:\tomcat, so this also works :) ), workers.properties, uriworkermap.properties, rewrites.properties (empty).
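
    For what it's worth, the usual way to get the whole my_app.local site answered by Tomcat is to mount everything, not just /my_app, in the redirector's mappings. A sketch of the two properties files (the worker name, AJP port and mount are assumptions; adjust to the existing setup):

        # workers.properties -- one AJP worker pointing at Tomcat
        worker.list=ajp13w
        worker.ajp13w.type=ajp13
        worker.ajp13w.host=localhost
        worker.ajp13w.port=8009

        # uriworkermap.properties -- send the whole my_app.local site to Tomcat
        /*=ajp13w

    With /* mounted, requests reach Tomcat's virtual host for that name; if my_app is not the ROOT application of a my_app.local <Host> in Tomcat's server.xml, Tomcat will still serve its own default root, which is exactly the symptom described.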

    Read the article

  • Magnetic Stripe Reader over Terminal Server has random Uppercase/Lowercase nonsense

    - by Peter Turner
    The magnetic stripe reader that I'm using and testing is just supposed to be sending keystrokes. Unfortunately, it seems to randomly send uppercase and lowercase keystrokes, sometimes substituting % for 5 and ^ for 6 and vice versa. (If you've ever programmed for a magnetic stripe reader, you know that's not a good thing.) Is there something in the RDP protocol that causes this? I've got kind of a convoluted system: running XP inside VirtualBox on Fedora 11, RDP'ed into a Win2k3 server. It works on the XP VM and it doesn't work on the RDP'ed one. What's weirder is that I'm not even emulating the USB drivers for my mag card reader. Linux is sending keystrokes straight into Windows, and MSTSC on Windows XP is sending junk to the Win2k3 server. I'm 99% certain this isn't a problem with the card reader, and it has nothing to do with my programming either (I get the same junk coming into Notepad that I get coming into our software [that's why I didn't ask on SO]). And it works with RDP clients other than MSTSC.exe! Needless to say, I'm in need of some RDP guruship.

    Read the article

  • Why is OpenSSH not using the user specified in ssh_config?

    - by Jordan Evens
    I'm using OpenSSH from a Windows machine to connect to a Linux Mint 9 box. My Windows user name doesn't match the ssh target's user name, so I'm trying to specify the user to use for login in ssh_config. I know OpenSSH can see the ssh_config file, since I'm specifying the identity file in it. The section specific to the host in ssh_config is:

        Host hostname
            HostName hostname
            IdentityFile ~/.ssh/id_dsa
            User username
            Compression yes

    If I do ssh username@hostname it works. Trying to rely on ssh_config alone gives:

        F:\>ssh -v hostname
        OpenSSH_5.6p1, OpenSSL 0.9.8o 01 Jun 2010
        debug1: Connecting to hostname [XX.XX.XX.XX] port 22.
        debug1: Connection established.
        debug1: permanently_set_uid: 0/0
        debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa type -1
        debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa-cert type -1
        debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa type 2
        debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3p1 Debian-3ubuntu5
        debug1: match: OpenSSH_5.3p1 Debian-3ubuntu5 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'hostname' is known and matches the RSA host key.
        debug1: Found key in /cygdrive/f/progs/OpenSSH/home/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa
        debug1: Offering DSA public key: /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    I was under the impression (as outlined in this question: How to make ssh log in as the right user?) that specifying User username in ssh_config should work. Why isn't OpenSSH using the username specified in ssh_config?

    Read the article

  • Windows 2008 64bit: applications and explorer always hang

    - by Phil Farthing
    I set up a couple of Windows 2008 64-bit systems about 5 months ago. Initially all seemed well. Now, however, for no apparent reason, things are dog slow: apps hang, Explorer hangs, and just clicking on something can cause a CPU spike of 100%, and often it's Explorer that is eating it up. As I have two on identical hardware and they experience the same problem, it doesn't seem related to add-on software. The only thing these have in common is Kaspersky, and I've tried disabling/uninstalling it to no avail. There are no useful error messages in the event logs. Actually, the system never even reports app hangs. Sometimes it's similar to what I've seen on Windows 7 systems, where the screen goes milky and asks if I want to troubleshoot, but that's only when I get impatient and click-happy. The really odd thing is that it will NOT do this for a few minutes at a time, and then it starts up again. For example, I will click on the Start menu and browse for the Admin Tools; the Start menu will hang at some point and I'll have to wait about a minute, then it's OK. The next time I do this, a few seconds later, it's fine. Every click seems to hang the first time around, then be OK the second time if I do the exact same thing. If anyone has any suggestions, please PLEASE let me know! Thanks =)

    Read the article

  • IIS permission configuration issue

    - by Dan
    Sorry, the title of this question is a little ambiguous, but I don't really have any idea where the issue lies - I'm seeking some clarification of the server error logs. Basically, I had a dedicated server running Windows 2003 and Plesk (v8, I think). Last week the server hardware failed and the entire thing had to be rebuilt from scratch. New hardware was put in, a new operating system (Win2008), a new Plesk installation (v9.5), new software (MSSQL etc.), then all data was ported over manually from the old C and D drives to restore all 30 client sites. It was hell! All had been okay for a couple of days, but about an hour ago - POP! Suddenly all sites went down, giving a 500 error. Restarting all services eventually brought everything back online, but I'm now living in total fear. It can - and probably will - happen again. The guys on support gave me the following errors from the server log:

    The Template Persistent Cache initialization failed for Application Pool 'ASP.NET v4.0 Classic' because of the following error: Could not create a Disk Cache Sub-directory for the Application Pool. The data may have additional error codes.

    The worker process for application pool 'domain1.com(domain)(2.0)(pool)' encountered an error 'Cannot read configuration file' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\domain1.com(domain)(2.0)(pool).config', line number '0'. The data field contains the error code.

    The worker process for application pool 'PleskControlPanel' encountered an error 'Cannot read configuration file' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\PleskControlPanel.config', line number '0'. The data field contains the error code.

    The support guys are so ambiguous about this, and it scares me horribly. Can anyone positively identify the cause of this error, which led to all client websites going offline? What can be done to prevent it from happening again? Any pointers would be very much appreciated! Thanks folks...

    Read the article

  • linux Firewall question

    - by bcrawl
    I have a few generic questions about firewalls, and I thought the community up here could help me out.

    1) I recently installed Ubuntu Server barebones. I checked for open ports and none were open, which was great. Is that because a firewall was installed, or because no applications were installed?

    2) I installed some applications (Apache, Postgres, SSH, a Java app and a few others). Between these, I ended up opening a few ports (~10). Now I have a list of all the ports I need open. So how do I go about protecting them? [Is this the right question to ask? Does the process go like this: install a firewall - allow said needed ports - deny the rest using iptables rules?] This is going to be open to the internet, hosting low-traffic e-commerce sites.

    3) What do you think is the easiest way for me to quasi-secure the server [low maintenance overhead/simplicity - any open source software that can make my life easier]? (See the sketch below.)

    4) Finally, of the said open ports [see 2], I have 2 ports I need to close because they are telnet ports. Can I close these ports without installing a "firewall"?

    Thanks all for the help, and Merry Christmas!!!!!!!
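
    (On question 1: a port only shows as open when a process is listening on it, so a bare install shows none even without a firewall.) On the "allow said ports, deny the rest" point, that is indeed the usual iptables shape. A hedged Python sketch that applies such a policy (the port list is illustrative; run as root, and keep a console open in case SSH gets locked out):

        import subprocess

        ALLOWED_TCP = [22, 80, 443, 5432]   # illustrative: ssh, http, https, postgres

        def iptables(*args):
            subprocess.run(["iptables"] + list(args), check=True)

        iptables("-F", "INPUT")                                   # start clean
        iptables("-A", "INPUT", "-i", "lo", "-j", "ACCEPT")       # allow loopback
        iptables("-A", "INPUT", "-m", "state",
                 "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT")
        for port in ALLOWED_TCP:
            iptables("-A", "INPUT", "-p", "tcp",
                     "--dport", str(port), "-j", "ACCEPT")
        iptables("-P", "INPUT", "DROP")                           # default deny

    On Ubuntu, ufw wraps exactly this pattern if maintaining raw rules feels like too much overhead.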

    Read the article

  • How can I wipe my iPod classic and fix any bad sectors on the hard drive without killing it?

    - by Sam Meldrum
    My iPod never finishes syncing and only syncs audio, not pictures or video - any ideas as to how I can fix it? My iPod classic 160GB worked well for a couple of years. I used to sync a lot of photos at full resolution to it, but this recently stopped working after I moved to Windows 7. iTunes is on the latest version (9.1.1.12), the iPod software is up to date (1.1.2), and Windows 7 is fully up to date and patched. The symptoms are that the iPod will start to sync and all audio (music and podcasts) will sync successfully, but the sync never completes: iTunes just keeps showing "Syncing iPod. Do not disconnect." I have left it trying for days. I have tried resetting the iPod using the Restore button, whereupon it restarts the sync with default options and again syncs audio, but nothing else. I suspect that something has gone wrong on the hard drive - either a bad sector or some corrupt data. Is there a process I can go through to fix this, e.g. SpinRite or a format? If so, how do I go about formatting an iPod, and will it be recognised as an iPod after the format and work as normal? Any advice on what to try next much appreciated. Update: I have eliminated problems with the files, PC or iTunes, as they sync fine to other iPods. I have also eliminated the cable by trying different cables which work with other iPods. What I'd really like to know is if there is any way to more fundamentally wipe the iPod safely, attempt to repair any bad sectors on the hard drive, and then start from scratch. Anyone ever managed this?

    Read the article

  • Alternatives to using email (in particular, Outlook) as a knowledge store?

    - by Umber Ferrule
    I suspect that, like many people, I use my work email account (accessed via Outlook 2007) to store information. I generally try to group similar things in folders and sub-folders, but with a multitude of folders this gets very unwieldy. In particular, it can be a bind to locate things using Outlook's tree structure. (As an aside: I've yet to come across a good free search add-on for Outlook.) I realise Outlook is not the best place to store all my information, and I'd prefer not to. In an ideal world I'd like to be able to organise all of the information stored in Outlook in a mind map (my software of choice being FreeMind) or wiki. To maintain an email audit trail, I've considered saving individual emails as files and using a mind map or wiki to link to them. What do people think of this? (I can't say I relish the thought of the exporting process!) Whatever I do is going to involve some pain (i.e. setting up a wiki/mind map) or sticking with what Outlook provides currently. Has anyone been in the same position? Has anyone mass-migrated information from Outlook? If so, what was the best way? Any ideas or alternative proposals?
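
    If the export route is chosen, much of the pain can be scripted away: Outlook's COM automation can walk a folder and save each message as a .msg file for a wiki or mind map to link to. A minimal Python sketch using pywin32 (the folder choice and output path are assumptions; it assumes every item has a Subject, which plain mail folders do):

        import os
        import win32com.client

        OUT_DIR = r"C:\mail_export"          # assumed destination
        os.makedirs(OUT_DIR, exist_ok=True)

        ns = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
        inbox = ns.GetDefaultFolder(6)       # 6 = olFolderInbox

        for i, msg in enumerate(inbox.Items):
            # Build a filesystem-safe name from the subject.
            safe = "".join(c for c in (msg.Subject or "no subject")
                           if c.isalnum() or c in " -_")[:80]
            msg.SaveAs(os.path.join(OUT_DIR, "%04d %s.msg" % (i, safe)), 3)  # 3 = olMSG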

    Read the article

  • Acer LCD says "no signal"

    - by Ken
    I have an Acer 24" LCD (model "AL2423WDR") that's about 3 years old. It worked perfectly for most of its lifetime so far. Recently it started giving problems. When I turn it on, it either says "no signal" on the display, or the power light goes yellow (as if in power-saving mode). This happens with both DVI and VGA (both of which worked fine before), and stranger still, the 4 buttons on the front, for accessing the on-screen menus, don't do anything. I've also tried different computer hardware and software (PC/Mac, Linux/MacOS), but nothing has worked. I've tried power-cycling it (with both the power button and the power switch), and also unplugging it entirely. The nonworking buttons suggest to me an issue with the firmware. I found a place on Acer's website that says I can send it in to have it fixed, at my expense, but I'll avoid that if I can. Is there a way to fully reset it manually? Or is there something else I can try?

    Read the article

  • Digitize video?

    - by kire
    I've got some movies on a VCR that I want to move to my computer somehow (for personal use). I have a video capture card and all the hardware required. I'm looking for the software: programs, codecs, etc. I like the format that most torrents come in, and I've got some of these I'd like to use as references for comparison. How can I see what codec, bitrate, etc. a movie is using, so I can pick the same and know that it will work and look good? For AVI files the bitrate is visible in Explorer, but it doesn't mention the codec used, and I also have a lot of MKV files that Explorer can't handle. All kinds of tips, tricks and other suggestions are welcome; this is completely new to me. How do I avoid video/audio getting out of sync, for example? Many movies you download have audio out of sync, so I guess this can happen quite easily. The encoding program has to run on Windows, and for playback the movies should work at least in VLC for Windows.
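
    On the "how can I see what codec and bitrate a movie uses" part: MediaInfo shows this in a GUI, and ffprobe (ships with FFmpeg) reads AVI and MKV alike from a script. A small Python sketch around ffprobe (assumes FFmpeg is installed and on PATH; the file name is a placeholder):

        import json
        import subprocess

        def probe(path):
            # Ask ffprobe for container and per-stream details as JSON.
            out = subprocess.check_output([
                "ffprobe", "-v", "error", "-print_format", "json",
                "-show_format", "-show_streams", path])
            return json.loads(out)

        info = probe("movie.mkv")
        print("container bitrate:", info["format"].get("bit_rate"))
        for s in info["streams"]:
            print(s["codec_type"], s["codec_name"], s.get("bit_rate", "n/a"))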

    Read the article

  • Dynamic subdomain routing

    - by Nader
    Hi everyone, I asked this question over at Stack Overflow but got very few views: http://stackoverflow.com/questions/2284917/route-web-requests-to-different-servers-based-on-subdomain Perhaps it's more applicable to this crowd. Here it is again for convenience: I have a platform where a user can create a new website using a subdomain. There will be thousands of these, e.g. abc.mydomain.com, def.mydomain.com - hopefully, if we are successful, hundreds of thousands. I need to be able to route these domains to different IPs to point at particular app servers. I have this mapping in a database right now. What are the best practices and recommended technologies here? I see a couple of options: 1) Have DNS set up with a wildcard CNAME entry so that all requests go to a single IP, where perhaps two machines using heartbeat (for failover) know how to look up the IP in the database and then do an HTTP redirect to the appropriate app server. This seems clunky and slow to me. 2) Run my own DNS server that can be programmatically managed, such that when a new site is created a DNS entry is added. We also move sites around to different app servers, so I would need to be able to update DNS entries in close to real time. Thoughts, anyone? Thanks. Update: Based on some suggestions below, it seems like reverse-proxy server(s) are the way to go. As I'll be rebalancing the domain-server mapping, these need to work instantly, and the TTL on a DNS solution could be a problem. Any recommendations on software to use, considering this domain-IP data is stored in a DB, and I'll need this to be performant? Update 2: I've set up external wildcard DNS pointing at an HAProxy web server whose job it is to route requests to backend servers. The mapping is stored in our internal PowerDNS server. The question now is how to get the HAProxy server (or another) to use the value of the internal DNS and not some config file or access list?
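
    For the final HAProxy question, one common pattern is to treat the internal DB/DNS as the source of truth and regenerate a host-to-backend mapping file for HAProxy from it, then reload; HAProxy itself won't query a database per request. A rough Python generator, where the schema, file paths and backend names are all assumptions:

        import sqlite3  # stand-in for the real mapping database

        # Assumed schema: sites(subdomain TEXT, backend TEXT), backend like "bk_app1"
        db = sqlite3.connect("mapping.db")
        rows = db.execute("SELECT subdomain, backend FROM sites").fetchall()

        with open("/etc/haproxy/hosts.map", "w") as f:
            for sub, backend in rows:
                f.write("%s.mydomain.com %s\n" % (sub, backend))

        # In newer HAProxy versions, haproxy.cfg can then pick the backend per
        # request from that file, e.g.:
        #   use_backend %[req.hdr(host),lower,map(/etc/haproxy/hosts.map,bk_default)]
        # followed by a graceful "service haproxy reload" after regeneration.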

    Read the article

  • What's the lowest cost, legal, Microsoft server stack you can assemble?

    - by McKAMEY
    Assuming that you have an app infrastructure that generally only requires: ASP.NET MVC / C# / .NET Database or NoSQL data store (must be accessible from C#) Here's the challenge to you server gods: What is the least expensive configuration that will allow you to deploy to production in a way that doesn't break any licensing rules? In what ways does this solution differ from the "standard" Microsoft deployment scenario? Where does this solution's performance break down once the app begins to scale? I'm not concerned about the hardware, only the server software itself. I would love to hear about any solutions you've personally put into production. Especially if they are unique alternatives. For ideas, consider some of the possible variations, a) any Microsoft server solutions where they have lowered the barrier to entry to compete with OSS, or b) any OSS alternatives to Microsoft products which perform at a similar level. An example of a): SQL Server 2008 Express Edition SP1 is a 100% free version of SQL Server which will scale to the needs of many smaller / early stage applications. An example of b): running the Mono Framework on Linux. An example of differing from the "standard" stack: running Mono on Linux will require a completely different server OS familiarity. None of the Windows-based knowledge really transfers. An example of breaking down under scale: SQL Server Express will only scale to 1GB of memory and 4GB of disk storage. After that point, the application will need to move to one of the paid versions of SQL Server.

    Read the article

  • apt-get : Size mismatch

    - by Cédric Girard
    I created a private deb repository to distribute a piece of software and its updates to 600 Ubuntu netbooks. Each time the network is connected, my script tries to do an apt-get update. But sometimes (quite often, in fact) I get this:

        Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb  Size mismatch

    The server is Apache 2.2, HTTPS only. There are no errors in its logs. Here is the script:

        apt-get update
        apt-get dist-upgrade --force-yes --yes

    Here is the complete output of apt-get:

        Ign https://myserver maverick Release.gpg
        Ign https://myserver/ubuntu/ maverick/main Translation-en
        Ign https://myserver maverick Release
        Ign https://myserver maverick/main i386 Packages/DiffIndex
        Ign https://myserver maverick/main i386 Packages
        Ign https://myserver maverick/main i386 Packages
        Hit https://myserver maverick/main i386 Packages
        Reading package lists...
        Reading package lists...
        Building dependency tree...
        Reading state information...
        The following packages will be upgraded:
          majdb utilitaires voosicomat
        3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 6207kB/6273kB of archives.
        After this operation, 0B of additional disk space will be used.
        WARNING: The following packages cannot be authenticated!
          utilitaires voosicomat majdb
        Get:1 https://myserver/ubuntu/ maverick/main voosicomat all 2.0.1 [4755kB]
        Get:2 https://myserver/ubuntu/ maverick/main majdb all 1.0.17 [1452kB]
        Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb  Size mismatch
        Fetched 7091kB in 21s (324kB/s)
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Regards,
    Cédric

    Read the article

  • Saving a file in a CSV type in Excel always removes the BOM

    - by rickp
    I've been trying (unsuccessfully) to find a reasonable solution/explanation for why Excel defaults to removing the BOM when saving a file as CSV. Please forgive me if you find this a duplicate of this question; that one handles reading CSV files with non-ASCII encoding, but it doesn't cover saving the file back out (which is where the biggest issue lies). Here is my current situation (which I gather is common among localized software dealing with Unicode characters and a CSV format): We export data to a CSV format using UTF-16LE, ensuring the BOM is set (0xFFFE). We validate after the file is generated with a hex editor to ensure it was set correctly. Open the file in Excel (for this example we're exporting Japanese characters) and witness that Excel loads the file with the correct encoding. Attempting to save this file prompts a warning message indicating that the file may contain features that are not compatible with Unicode encoding, but asks if you'd like to save anyway. If you open the Save As dialog, it immediately asks you to save the file as "Unicode Text" rather than CSV. If you select the "CSV" extension and save the file, it removes the BOM (obviously along with all the Japanese characters). Why would this happen? Is there a solution to this problem, or is this a known 'bug'/limitation of Excel? Additionally (as a side issue), it appears that Excel, when loading UTF-16LE encoded CSV files, only uses TAB delimiters. Again, is this another known 'bug'/limitation of Excel?
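
    As a workaround on the generation side, writing the file as tab-delimited UTF-16 keeps both the BOM and Excel happy, which lines up with Excel's own "Unicode Text" save type and the TAB-delimiter behaviour noted above. A Python sketch (the file name and sample rows are illustrative):

        import csv

        # encoding="utf-16" writes the BOM automatically (little-endian on Windows).
        with open("export.csv", "w", encoding="utf-16", newline="") as f:
            writer = csv.writer(f, delimiter="\t")   # Excel expects tabs with UTF-16
            writer.writerow(["id", "名前"])
            writer.writerow([1, "テスト"])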

    Read the article

  • Free/opensource application for charting stock prices?

    - by Homunculus Reticulli
    I am looking for a free or FOSS application for SIMPLY charting stock prices. I am not interested in any of the other nonsense typically bundled with such packages (technical analysis, back-testing, tracking, etc.). All I want to do is the following: import a file from CSV and plot it on the chart; scroll the chart left/right (a zoom feature would be nice too); draw a straight line (between 2 points) on the plot; plot the graph at different resolutions (e.g. weekly, monthly, or some other custom resolution that I want); and print the displayed graph (I can always use screen capture if printing is too much to ask). That's all I want to do; I am not interested in anything else. I would have thought I could have found something by now. I would have written my own tool (and I still will, at a later stage), but I am a bit short of time at the moment, so I just want something that will do all of the above. Can anyone recommend a package? Last but not least, I am running on Linux (and would prefer to do so - BUT if I have to, I can run on, ahem - you know, Windows).
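
    For what it's worth, all five requirements fit in a few lines of matplotlib, which may be quicker than hunting for a package. A sketch with pandas (the CSV column names Date/Close and the two line endpoints are assumptions); matplotlib's default window already gives pan/zoom and save-to-file for printing:

        import pandas as pd
        import matplotlib.pyplot as plt

        df = pd.read_csv("prices.csv", parse_dates=["Date"], index_col="Date")
        series = df["Close"].resample("W").last()   # weekly; use "M" for monthly, etc.

        fig, ax = plt.subplots()
        ax.plot(series.index, series.values, label="Close")

        # Straight line between two chosen points on the series:
        p1, p2 = series.index[5], series.index[30]
        ax.plot([p1, p2], [series.loc[p1], series.loc[p2]], "r-")

        ax.legend()
        plt.show()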

    Read the article

  • Google Chrome doesn't want to access Facebook

    - by Pieter van Niekerk
    I have been experiencing a bit of a problem with Chrome over the last couple of days where it doesn't want to access Facebook. When I open Chrome it works fine for a while, and then if I refresh the page it gives me Chrome's 'This webpage is not available' message:

        This webpage is not available
        Google Chrome could not load the webpage because www.facebook.com took too long to respond. The website may be down, or you may be experiencing issues with your Internet connection.
        Here are some suggestions:
        Reload this webpage later.
        Check your Internet connection. Restart any router, modem, or other network devices you may be using.
        Add Google Chrome as a permitted program in your firewall's or antivirus software's settings. If it is already a permitted program, try deleting it from the list of permitted programs and adding it again.
        If you use a proxy server, check your proxy settings or contact your network administrator to make sure the proxy server is working. If you don't believe you should be using a proxy server, adjust your proxy settings: Go to the wrench menu > Options > Under the Hood > Change proxy settings... > LAN Settings and deselect the "Use a proxy server for your LAN" checkbox.

    This problem only occurs when using the proxy and doesn't occur at all when not on the proxy. I have also tried different browsers (IE9 and Firefox 9.0.1), but it doesn't occur in any of them. The problem goes away for a while when I restart Chrome, only to happen again a couple of minutes later. I have tried deleting the cookies for Facebook without restarting Chrome, but to no avail. I am using Windows 7 with Chrome 17.

    Read the article

  • MS licensing of multiple RDP sessions for non-MS products in Windows XP Pro

    - by vgv8
    Questions 1) and 2) were moved into a separate thread, "Which Windows remote connections bypass the LSA? And what are the definitions of login vs. logon session?"

    3) Do I understand correctly that multiple remote RDP sessions are supported by Windows XP but require additional (or modified) licensing? Which one? Or is it always illegal to run multiple RDP sessions on Windows XP, even through non-MS commercial software?

    Update 1: I have already understood my error - the main questions were about definitions (important for finding a common language with others) and the licensing questions were collateral - but they were already answered. I shall try to separate these questions, leaving here the questions about RDP licensing and migrating the other questions into a separate thread.

    Update 2: "Trying to 'work around' licensing terms is pointless and wasteful of time." I never try "working around" and I never ask anything like this; I am not a specialist in licensing. My clients/employers provide me with tools and licensing support. They have corporate lawyers and planning/accounting/purchase departments for these issues. The questions that I ask are a matter of scalability and efficiency (saving my time and others') in my development work. For example, just because I need authentication against Windows AD, is it time-saving to use ADAM instead of deploying a full-fledged AD with DC + servers + whatever else? "Nobody is forcing you to use Windows XP." I shall not rush into re-installing the operating systems on all my development machines (at home, at client premises) just because a few guys have a lot of fun downvoting development-related questions on serverfault.com. If I did, I would make a joker of myself in the eyes of my colleagues et al.

    Update 3: I unmarked this question as answered since the answer did not even address the question, at least mine. Should I understand that Terminal Server PRO, allowing Windows XP and Windows Small Business Server 2003 to host multiple remote desktop sessions, is illegal? Related: my answer to the question "Has windows XP support multiple remote login session (RDP) at a time?"

    Read the article
