Search Results

Search found 21853 results on 875 pages for 'point'.


  • Kubuntu: apt-get install of php5-dev: libtool version mismatch?

    - by pinkgothic
    (Warning, clueless-newbism ahead.) Background info: I'm actually trying to install/upgrade xdebug. sudo pecl install xdebug yields: downloading xdebug-2.0.5.tgz ... Starting to download xdebug-2.0.5.tgz (289,234 bytes) ............................................................done: 289,234 bytes 67 source files, building running: phpize sh: phpize: not found ERROR: `phpize' failed A quick Google search tells me that phpize is a part of a package called php5-dev, so off I ran to install that. My problem is that using sudo apt-get install php5-dev fails with this output: sudo apt-get install php5-dev Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: php5-dev: Conflicts: libtool (>= 2.2) but 2.2.6a-4 is to be installed E: Broken packages 2.2.6a-4 is greater than 2.2, so I'm not sure why it's hanging itself up at that point. I'm guessing the fact that it's not entirely numeric is throwing apt-get off? I can probably install xdebug manually (though I've never done this before, so picture me here with a clueless-newb deer-in-headlights look, violently shaking my head and begging for a simpler solution) rather than via pecl / aptitude, but is there a way I can make aptitude install php5-dev despite the bogus 'broken package' claim? Is it even bogus, or am I misreading the error message? Alternatively: Could I install phpize in some other way (e.g. via pear or pecl)?

    Read the article
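
    A hedged note for anyone hitting the same wall: the quoted line is a Conflicts:, not a Depends:, so libtool being >= 2.2 is precisely what blocks the install rather than satisfying it. Before forcing anything, it may help to see what the archive offers and to let aptitude's resolver propose a libtool downgrade, which plain apt-get will not do on its own (a sketch; nothing below is specific to this poster's machine):

      # show candidate libtool versions and what php5-dev actually declares
      apt-cache policy libtool
      apt-cache show php5-dev | grep -i conflicts
      # aptitude's interactive resolver can offer to downgrade libtool to satisfy the conflict
      sudo aptitude install php5-dev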

  • Config Apache HTTP Server for Eclipse

    - by hqt
    Maybe this question is silly, but I really don't know how to solve it. First, as with any other server, I want to define a new server. So, in Eclipse, I go to Window > Preferences > Server: 1) When I add a new server, there is no category in the list for Apache HTTP Server, just Apache Tomcat. So I click "Download additional server adapters" -- it's still not in the list. 2) So I search. I point to the location where Apache is installed. Good, Eclipse sees that it is an HTTP server, and Eclipse sees the folder to put my project into (because I use LAMPP, so that folder isn't in the Apache folder). But here is my problem: when I want to run a new PHP project, I right-click, Run on Server. A new dialog appears asking me to choose which server to run on, and in the list of servers there is no HTTP Server, so I don't know how to choose the Apache HTTP Server (Eclipse doesn't show the server I have defined, it only finds the adapter). So if I want to run this project, I must copy everything and paste it into the Apache folder. Not very handy! Please help me. Thanks :)

    Read the article

  • What does a status of "Backup" mean for Windows 7 local user profiles?

    - by Howiecamp
    Summary: Upon logging on to Windows 7 RTM I get a message that my profile can't be loaded and a temporary user profile is created. I logged off and back on as Administrator. The user profiles dialog shows my user profile with a Type of "Local" and a Status of "Backup" rather than "Local" which it should be. How can I change this to make my user profile accessible? The long story: My PC has a single hard drive partitioned into a C: and a D:. I'd moved my user profile directory (c:\Users) to d:\Users, removed c:\Users and then used mklink.exe to create a directory symbolic link c:\Users -- d:\Users. Worked like a charm since I did it. Today, I make a System Restore Point for drives C: and D:. Next, I dismounted D: and used the Disk Management tool to remove the "D:" drive letter from the D volume. (My plan was to reboot and then redirect the symbolic link.) Upon reboot, I got the user profile error described above. Finally, I restored the System Restore Points that I'd created for both drives and then rebooted again. Same issue.

    Read the article

  • Help setting up a secondary authoritative DNS server.

    - by GLB03
    We have three authoritative DNS servers and three recursive/caching DNS servers on my campus. Authoritative servers: DNS1- Windows 2003 DNS2- Old Red Hat ----- Replacing w/ newer version DNS3- Windows 2008 (I installed) Caching and recursive resolver servers: Server1- Windows 2003 Server2- CentOS 5.2 (I installed) Server3- CentOS 5.3 (I installed) I am replacing DNS2 with a newer Red Hat version, but have no documentation on how it was implemented. I have set up caching servers and Windows authoritative servers, but not a Linux secondary authoritative server. I have a Perl script from the original server that pulls data from our DNS1 server. We use DJBDNS and TinyDNS on our Linux servers. Our network engineer says the DNS2 server I am replacing is an authoritative server that doesn't need to be caching, but the only instructions I can find are for an authoritative server that does caching as well. Can someone point me in the right direction? I thought I was on the right track with using these instructions but when I query my new DNS server I get "No response from server"; I have temporarily disabled iptables to eliminate it as an issue. ps -aux | grep dns avahi 3493 0.0 0.2 2600 1272 ? Ss Apr24 0:05 avahi-daemon: running [newdns2.local] root 5254 0.0 0.1 3920 680 pts/0 R+ 09:56 0:00 grep dns root 6451 0.0 0.0 1528 308 ? S Apr29 0:00 supervise tinydns dnslog 6454 0.0 0.0 1540 308 ? S Apr29 0:00 multilog t ./main tinydns 9269 0.0 0.0 1652 308 ? S Apr29 0:00 /usr/local/bin/tinydns

    Read the article
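
    For the djbdns route specifically, a minimal sketch of a secondary that slaves one zone from DNS1 over AXFR. Assumptions, not facts about this site: DNS1 allows zone transfers to this host, ucspi-tcp's tcpclient is installed, the tinydns data directory is the usual /etc/tinydns/root, and the zone name and master address below are placeholders.

      ZONE=example.edu              # placeholder zone
      MASTER=10.10.10.1             # placeholder address of DNS1
      cd /etc/tinydns/root
      # axfr-get pulls the zone over the TCP connection tcpclient opens to the master
      tcpclient "$MASTER" 53 axfr-get "$ZONE" data data.tmp
      make                          # rebuilds data.cdb via tinydns-data
      # then query the new server directly (use the address tinydns is bound to in env/IP)
      dnsq soa "$ZONE" 127.0.0.1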

  • Win8/7/XP print spooler not getting along with Zebra ZT230 via WIFI

    - by Jonathan M
    I have a graphics-intensive 4"x6" label I'm printing to the ZT230. I'm printing multiple (10) copies. When connected via USB, all goes well. However, when connected via wifi, I only get 2 of the labels. A wireshark capture shows that at some point in the process my computer (presumably my windows spooler) is sending a reset packet, which, I believe, would pretty much kill the print job. I'm getting the same results on Win8, Win7 and WinXP. The print job was originally generated on Zebra's ZebraDesigner2 software. For easier diagnosis, I captured it to a .prn file. The .prn file can be found here: https://drive.google.com/file/d/0BwxF_9SAkKzLLTF5bUJVT0lESUU/edit?usp=sharing And the wireshark capture file can be found here: https://drive.google.com/file/d/0BwxF_9SAkKzLTGpSS0ktZW1xV28/edit?usp=sharing And the printer configuration listing: https://docs.google.com/document/d/1zh1Tw4D4yNa2uljOIL1kO2z8se9HK859irpUEwyxlyY/edit?usp=sharing I've started a discussion with Zebra Tech Support, and they're working on it, but I thought I'd toss it out here for more ideas since we're getting kind of stumped. Any ideas why this may be happening?

    Read the article

  • How to re-join an AD2003 domain with Samba after deleting the machine account?

    - by Guss
    During some troubleshooting I deleted the machine account for a Linux server running samba from our AD 2003 domain. We are using Kerberos for authentication, and after I deleted the machine account I tried to join the domain again using net ads join -U Administrator But I keep getting Kerberos errors like these: [2009/08/18 16:14:36, 0] libads/kerberos.c:ads_kinit_password(228) kerberos_kinit_password [email protected] failed: Client not found in Kerberos database Failed to join domain: Improperly formed account name It appears as if samba remembers that it once had an account with the AD and keeps trying to reconnect to it, but I want to create a new account from scratch. I tried to delete all the .tdb files I could find as well as everything under /var/cache/samba but to no avail - it still behaves the same. I also tried to create the machine account on the AD side, but then I get a similar error when I try to join, about failure to authenticate with the machine account - it looks like samba tries the previous machine account password and I don't know how to reset it, or even if I could figure out what samba uses - how to set it in the AD. Any help would be greatly appreciated, as at this point the only thing I can think about is to reformat and reinstall the machine, and I would really REALLY love to not do that. Thanks in advance.

    Read the article
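
    A hedged cleanup-and-rejoin sequence, in case it helps: clear Samba's cached machine credentials and any stale tickets, make sure the clock matches the DC, then join again so the computer object is created from scratch. Paths and service names are common defaults and the DC name is a placeholder; your distribution may differ.

      sudo /etc/init.d/smb stop                       # or /etc/init.d/samba stop
      sudo rm -f /var/lib/samba/private/secrets.tdb   # cached machine-account password (path varies)
      sudo kdestroy                                   # drop stale Kerberos tickets
      sudo ntpdate dc1.example.com                    # placeholder DC; Kerberos is unforgiving about clock skew
      kinit Administrator                             # confirm ordinary Kerberos auth works first
      sudo net ads join -U Administrator              # recreates the computer object
      sudo net ads testjoin                           # should report "Join is OK"
      sudo /etc/init.d/smb start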

  • PC only boots from Linux-based media and won't boot from DOS-based media

    - by xolstice
    I have this problem where the PC only seems to boot from a floppy disk or CD if it was created as a Linux-based bootable media. If it was created as a DOS-based bootable media the system just freezes at the starting point of the boot process. I originally asked this under question 139515 for CD booting only, and based on the given answers, I was under the impression the problem was with the CD-ROM drive; however, I have since installed a newly purchased CD-ROM drive and the same freezing occurs. This then made me try the DOS bootable floppy disk approach and I was quite surprised that it exhibited the same freezing problem. I then tried try a Linux bootable floppy and everything booted from it without any issues. As I mentioned in my original question, the PC was booting just fine from the DOS-based bootable CD, and then it suddenly decides to pull this freezing stunt. I can't remember if I changed anything in the BIOS settings that may I have caused the problem, but I am wondering if that could be the case - it is currently using the Award Module BIOS v4.60PGMA. Can anyone help?

    Read the article

  • What to do when launchpad is down?

    - by Jon
    As I am writing this (Friday, November 8, 2013 at 9:59:18 PM EST) launchpad is down. Apparently there is a power failure (https://twitter.com/launchpadstatus/status/398980619880775680). I tried running sudo apt-get update on my Ubuntu install. However, I simply get stuck on this: Ign http://ppa.launchpad.net precise InRelease 100% [Waiting for headers] Being a Ubuntu newbie, I tried to point my sources.list file to a different source. I backed up the original sources.list and then deleted the entire file to start afresh. I then added the following lines to it: deb http://mirror.anl.gov/pub/ubuntu/ precise main deb-src http://mirror.anl.gov/pub/ubuntu/ precise main I figured that since I have a different mirror, there would be no problem updating. I was wrong. I get stuck at the same place. I have several questions: Why do I need to hit launchpad? I do not reference it in my sources.list file at all. Is this something where the mirror redirects me to launchpad? Is there a good article out there that I can read on how exactly this whole apt-get update thing works that will help me understand why it is hitting launchpad? Is there any way to get my Ubuntu to update while launchpad is down? Isn't there any redundancy for the launchpad servers?

    Read the article
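
    On questions 1 and 3: the ppa.launchpad.net line almost certainly comes from a file under /etc/apt/sources.list.d/, which apt reads in addition to sources.list, so replacing sources.list alone doesn't remove it. A hedged way to confirm and to sideline those entries until Launchpad is back (review what the grep matches before running the sed):

      grep -rl launchpad.net /etc/apt/sources.list /etc/apt/sources.list.d/
      # comment out just the PPA lines, keeping .bak copies alongside
      sudo sed -i.bak '/launchpad\.net/ s/^deb/# deb/' /etc/apt/sources.list.d/*.list
      sudo apt-get update            # should now finish against the mirror alone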

  • What's up with stat on Macos/Darwin? Or filesystems without names...

    - by Charles Stewart
    In response to a question I asked on SO ("Give the mount point of a path"), one respondent suggested using stat to get the device name associated with the volume of a given path. This works nicely on Linux, but gives crazy results on Mac OS X 10.4. For my system, df and mount give: cas cas$ df Filesystem 512-blocks Used Avail Capacity Mounted on /dev/disk0s3 58342896 49924456 7906440 86% / devfs 194 194 0 100% /dev fdesc 2 2 0 100% /dev 1024 1024 0 100% /.vol automount -nsl [166] 0 0 0 100% /Network automount -fstab [170] 0 0 0 100% /automount/Servers automount -static [170] 0 0 0 100% /automount/static /dev/disk2s1 163577856 23225520 140352336 14% /Volumes/Snapshot /dev/disk2s2 409404102 5745938 383187960 1% /Volumes/Sparse cas cas$ mount /dev/disk0s3 on / (local, journaled) devfs on /dev (local) fdesc on /dev (union) on /.vol automount -nsl [166] on /Network (automounted) automount -fstab [170] on /automount/Servers (automounted) automount -static [170] on /automount/static (automounted) /dev/disk2s1 on /Volumes/Snapshot (local, nodev, nosuid, journaled) /dev/disk2s2 on /Volumes/Sparse (asynchronous, local, nodev, nosuid) Trying to get the devices from the mount points, though: cas cas$ df | grep -e/ | awk '{print $NF}' | while read line; do echo $line $(stat -f"%Sdr" $line); done / disk0s3r /dev ???r /dev ???r /.vol ???r /Network ???r /automount/Servers ???r /automount/static ???r /Volumes/Snapshot disk2s1r /Volumes/Sparse disk2s2r Here, I'm feeding each of the mount points scraped from df to stat, outputting the results of the "%Sdr" format string, which is supposed to be the device name: Cf. stat(1) man page: The special output specifier S may be used to indicate that the output, if applicable, should be in string format. May be used in combination with: ... dr Display actual device name. What's going on? Is it a bug in stat, or some Darwin VFS weirdness? Postscript Per Andrew McGregor, try passing "%Sd" to stat for more weirdness. It lists some apparently arbitrary subset of files from CWD...

    Read the article
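
    A hedged side note rather than an explanation of the stat behaviour: if the goal is just "device (or filesystem source) for a mount point", POSIX df -P reports it in the first column on both Linux and Mac OS X, and the pseudo-filesystems with no /dev node show their fstab-style name instead of garbage. The path is a placeholder.

      mp=/Volumes/Snapshot
      df -P "$mp" | awk 'NR==2 {print $1}'     # prints /dev/disk2s1 on the system shown above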

  • RabbitMQ message consumers stop consuming messages

    - by Bruno Thomas
    Hi Server Fault, our team is in a spike sprint to choose between ActiveMQ and RabbitMQ. We made 2 little producer/consumer spikes sending an object message with an array of 16 strings, a timestamp, and 2 integers. The spikes are OK on our dev machines (messages are consumed fine). Then came the benchmarks. We first noticed that sometimes, on our machines, when we were sending a lot of messages, the consumer would hang. It was still there, but the messages were accumulating in the queue. Then we went to the bench platform: a cluster of 2 RabbitMQ machines, 4 cores/3.2GHz, 4GB RAM, load balanced by a VIP; one to 6 consumers running on the RabbitMQ machines, saving the messages in a MySQL DB (same type of machine for the DB); 12 producers running on 12 AS machines (Tomcat), driven by JMeter running on another machine. The load is about 600 to 700 HTTP requests per second on the servlets that produce the same load of RabbitMQ messages. We noticed that sometimes consumers hang (well, they are not blocked, but they don't consume messages anymore). We can see this because each consumer saves around 100 msg/sec in the database, so when one stops consuming, the overall rate of messages saved per second in the DB falls by the same ratio (if, say, 3 consumers stop, we fall from around 600 msg/sec to 300 msg/sec). During that time, the producers are fine and still produce at the JMeter rate (around 600 msg/sec). The messages are in the queues and taken by the consumers that are still "alive". We load all the servlets with the producers first, then launch all the consumers one by one, checking that the connections are OK, then run JMeter. We are sending messages to one direct exchange. All consumers are listening to one persistent queue bound to the exchange. That point is major for our choice. Have you seen this with RabbitMQ? Do you have an idea of what is going on? Thank you for your answers.

    Read the article
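
    A hedged diagnostic for the next reader: a consumer that "stops consuming" while the broker stays healthy is often sitting on unacknowledged deliveries (an unbounded basic.qos prefetch plus a missed ack is enough). Depending on the RabbitMQ version, rabbitmqctl can show whether that is what's happening:

      # per queue: are messages piling up as unacked rather than ready?
      rabbitmqctl list_queues name messages_ready messages_unacknowledged consumers
      # per channel: which connection holds the unacked messages, and with what prefetch
      rabbitmqctl list_channels connection messages_unacknowledged prefetch_count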

  • Make a snapshot of a live mySQL database with myISAM & innoDB tables without locking

    - by Artem
    We have a live database in production where we are running out of space on the server. So I would like to transfer to a new server without any downtime (or as little downtime as possible). In general, I would also like to have a hot failover copy of the database available. I would like to use replication to get all of the data copied to the new machine, and then at some point flip a switch and have that new machine become the master (normal failover scenario). My problem is that I am not sure how to initialize replication without locking the DB to make the initial snapshot I will use. Is there any way to do this? I know I could do it using single-transaction if I were using InnoDB, but very unfortunately we have some MyISAM tables in there (in fact the largest 150GB table is MyISAM and I want to switch it to InnoDB but I can't do it until I have more space & a hot copy to switch to). Any ideas? Is there some way to make such a snapshot? Or is there alternatively a way to get replication to "catch up" without a snapshot for initialization?

    Read the article
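
    A hedged sketch of the usual compromise, since a truly lock-free consistent copy isn't possible with MyISAM in the mix: mysqldump with --master-data records the binlog coordinates the new slave needs, --single-transaction keeps the InnoDB side consistent, and the MyISAM tables are only as consistent as the write traffic during the dump allows, so quiesce their writers if you can. File names are placeholders.

      mysqldump --all-databases --single-transaction --master-data=2 \
                --routines --flush-logs > /backup/full-$(date +%F).sql
      # on the new server, after loading the dump, pull the coordinates back out of it:
      grep -m1 'CHANGE MASTER' /backup/full-*.sql
      # run that CHANGE MASTER TO statement (plus MASTER_HOST/MASTER_USER), then START SLAVE;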

  • Windows 7 Blank Screen on Boot / Login

    - by Greg
    I have a new system that's having a few problems... sometimes (seems to be when the PC is cold, i.e. has been switched off for a while, though that could be my imagination) I get a blank blue screen when I boot up. The system boots normally and auto-logs-in. The desktop loads and I'm even able to launch applications, but then everything disappears and the screen goes to the default windows desktop blue colour (not the desktop image, just a plain blue with no mouse cursor). At this point the machine completely locks up - I'm unable to even toggle Num Lock and have to hold in the power button for 5 seconds to kill it. Interestingly if I manage to launch some applications before it goes blank, they will usually crash... sometimes explorer.exe will crash too. When I reboot, the system is fine and stable. I've installed the latest graphics drivers and run memtest86+ for 6 passes (and counting) with no errors. The system specs are: CPU: Intel I7 2.66 @ 3.4GHz RAM: 6GB (3 * 2GB DDR3) HDD: 128GB Crucial M225 SSD Motherboard: Gigabyte EX58-UD3R Gfx: ATI Radeon Sapphire 5870 1GB Note: There are a few similar questions but I haven't found one that matches my symptoms

    Read the article

  • Defeating the RAID5 write hole with ZFS (but not RAID-Z) [closed]

    - by Michael Shick
    I'm setting up a long-term storage system for keeping personal backups and archives. I plan to have RAID5 starting with a relatively small array and adding devices over time to expand storage. I may also want to convert to RAID6 down the road when the array gets large. Linux md is a perfect fit for this use case since it allows both of the changes I want on a live array and performance isn't at all important. Low cost is also great. Now, I also want to defend against file corruption, so it looked like a RAID-Z1 would be a good fit, but evidently I would only be able to add additional RAID5 (RAID-Z1) sets at a time rather than individual drives. I want to be able to add drives one at a time, and I don't want to have to give up another device for parity with every expansion. So at this point, it looks like I'll be using a plain ZFS filesystem on top of an md RAID5 array. That brings me to my primary question: Will ZFS be able to correct or at least detect corruption resulting from the RAID5 write hole? Additionally, any other caveats or advice for such a set up is welcome. I'll probably be using Debian, but I'll definitely be using Linux since I'm familiar with it, so that means only as new a version of ZFS as is available for Linux (via ZFS-FUSE or so).

    Read the article
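
    On the detect-versus-correct part of the question, a hedged sketch (the device name is whatever the md array ends up as, and property support under the ZFS-FUSE of that era may lag native ZFS): with a single top-level vdev, ZFS checksums will detect bad blocks on every read and scrub, and copies=2 is the only way to let it also repair file data without giving it redundancy of its own.

      zpool create tank /dev/md0        # single "device": detection only, by default
      zfs set copies=2 tank             # optional self-healing, at the cost of half the space
      zpool scrub tank                  # run periodically to surface latent corruption
      zpool status -v tank              # lists any files with unrecoverable errors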

  • Installing Tomcat on CentOS 5

    - by andybaird
    Disclaimer: I am not a server admin, I am a Windows user who has led a life of sinful installation wizards and drag-and-drop. I'm attempting to install Tomcat on CentOS 5 hosted by a MediaTemple dedicated virtual server. I basically followed this guide: Installed jpackage and configured the yum.repo.d jpackage file to set enabled=1 Used yum to install java (yum install java) Downloaded the binary distribution of Tomcat with "wget http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.14/bin/apache-tomcat-6.0.14.tar.gz" set JAVA_HOME to point at the JDK location I found with "export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/" I gunzip/untar the Tomcat files and run ./startup.sh to start the Tomcat server. That is supposed to put the Tomcat server at myserver.com:8080 - however, I just get a 'could not contact host' error when I try to browse to it (or when I try 'curl localhost:8080' from SSH) After I type ./startup.sh, here is the console output: [root@myserver bin]# ./startup.sh Using CATALINA_BASE: /root/apache-tomcat-6.0.14 Using CATALINA_HOME: /root/apache-tomcat-6.0.14 Using CATALINA_TMPDIR: /root/apache-tomcat-6.0.14/temp Using JRE_HOME: /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/ [root@myserver bin]# Is there a step I have missed here? Edit: I've now discovered by looking at the log that the following error is occurring: Error occurred during initialization of VM Could not reserve enough space for object heap

    Read the article
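
    That last line is the JVM failing to allocate its default heap inside the VPS's memory limit rather than a Tomcat problem as such. A hedged workaround is to cap the heap explicitly before starting Tomcat; the sizes here are illustrative, not known-good values for the (dv)'s actual allotment.

      export JAVA_OPTS="-Xms64m -Xmx256m"       # catalina.sh picks this up
      cd /root/apache-tomcat-6.0.14/bin && ./startup.sh
      netstat -tlnp | grep 8080                 # confirm something is actually listening
      tail -f ../logs/catalina.out              # watch for startup errors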

  • Multiple Internet connections, multiple networks and split access in Linux

    - by Swapneel Patnekar
    I am having trouble setting up multiple internet connections for split access in Linux. We have 3 internet connections from 3 different ISP's. We want to configure our Linux gateway machine such that our three internal networks 10.2.1.0/24, 192.168.20.0/24 & 192.168.2.0/24 use ISP1, ISP2 and ISP3 respectively in a split access manner. Outlined below is the layout/settings, Interfaces of the Linux Gateway connected to Routers: eth0: 10.1.1.2<---------->10.1.1.1(Internal Interface of ADSL Router)[ISP1] eth1: 192.168.15.2<------>192.168.15.1(Internal Interface of 3G Router)[ISP2] eth3: 192.168.1.2<------->192.168.1.1(Internal Interface of ADSL Router)[ISP3] Kindly note that none of the interfaces in the Linux gateway has a public static IP address. Routers of ISP1 and ISP2 get assigned a dynamic public IP address when connected to the Internet, router of ISP3 has been assigned a public static IP address. Interface of Linux gateway connected to a switch, eth4: 10.2.1.1(LAN Interface for ISP1) eth4:0 192.168.20.1(LAN interface for ISP2) eth4:1 192.168.2.1(LAN Interface for ISP3) eth4:0 & eth4:1 are virtual interfaces with eth4 being the interface connected physically. Based on http://linux-ip.net/html/adv-multi-internet.html I've set the following routes, ip route flush table 4 ip route show table main | grep -Ev ^default | while read ROUTE ; do ip route add table 4 $ROUTE done ip route add table 4 default via 192.168.15.1 ip rule add fwmark 4 table 4 ip route flush cache Additionally, using the following iptables rules to mark & route packets as per the guide mentioned above : http://pastebin.com/KzWHFGJA At this point, computers from 192.168.2.0/24 network are successfully able to reach the Internet through ISP3. 192.168.20.0/24 and 10.2.1.0/24 are unable to access the Internet through ISP1 and ISP2 respectively. Any inputs will be much appreciated !

    Read the article
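
    A hedged variation that is sometimes easier to debug than fwmark-based routing: source-based rules per internal subnet, one routing table per ISP, NAT out of the matching interface, and rp_filter relaxed. Only the 192.168.20.0/24 -> ISP2 leg is shown; the 10.2.1.0/24 -> ISP1 leg is the same pattern on eth0, and the table number is just an arbitrary label.

      ip route add table 2 default via 192.168.15.1 dev eth1
      ip rule add from 192.168.20.0/24 table 2
      ip route flush cache
      iptables -t nat -A POSTROUTING -s 192.168.20.0/24 -o eth1 -j MASQUERADE
      # reverse-path filtering tends to silently drop the asymmetric replies
      for f in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 0 > "$f"; done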

  • Forward Apache to Django dev server

    - by Alex Jillard
    I'm trying to get Apache to forward all requests on port 80 to 127.0.0.1:8000, which is where the Django dev server runs. I think I have it forwarding properly, but there must be an issue with 127.0.0.1:8000 not being run by Apache? I'm running the Django dev server in an Ubuntu VMware instance, and I'd like other people in the office to see the apps in development without having to promote anything to our actual dev/staging servers. Right now the virtual machine picks up an IP for itself, and when I point a browser to that URL with the default Apache config, I get the default Apache page. I've since changed the httpd.conf file to the following to try and get it to forward the requests to the Django dev server: ServerName localhost <Proxy *> Order deny,allow Allow from all </Proxy> <VirtualHost *> ServerName localhost ServerAdmin [email protected] ProxyRequests off ProxyPass * http://127.0.0.1:8000 </VirtualHost> All I get are 404s with this, and in error.log I get the following (192.168.1.101 is the IP of my computer; 192.168.1.142 is the IP of the virtual machine): [Mon Mar 08 08:42:30 2010] [error] [client 192.168.1.101] File does not exist: /htdocs

    Read the article
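
    Two things stand out in that config, so here is a hedged rewrite (Ubuntu-style layout, and the site file name is made up): ProxyPass takes a URL path rather than "*", and mod_proxy_http has to be enabled for an http:// backend, which would explain requests falling through to a non-existent docroot.

      sudo a2enmod proxy proxy_http
      # /etc/apache2/sites-available/django-dev  (hypothetical file name)
      <VirtualHost *:80>
          ServerName localhost
          ProxyPreserveHost On
          ProxyPass        / http://127.0.0.1:8000/
          ProxyPassReverse / http://127.0.0.1:8000/
      </VirtualHost>
      # enable it, reload Apache, and make sure the dev server is actually running in the VM
      sudo a2ensite django-dev && sudo /etc/init.d/apache2 reload
      python manage.py runserver 127.0.0.1:8000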

  • Best practice for Exchange 2010 HA topology considering 6 x Exchange licenses and TMG 2010

    - by MadBoy
    What would be the best topology, considering: 6 x Exchange 2010 Standard Licenses 2 x Separate locations that are supposed to support redundancy in case of link problems 4 x Forefront TMG 2010 with Forefront Security and Forefront Protection/Security Multiple locations worldwide using those Exchange servers. Most locations will be connected with a VPN tunnel (the ones hosting Exchange for sure). I was thinking something like this: Location MAIN (about 70-100 people): 2x TMG 2010 in NLB 1x Exchange 2010 CAS/HUB Role 2x Exchange 2010 Mailbox Role (Active + Passive) Location SUPPORT (about 20 people): 2x TMG 2010 in NLB 1x Exchange 2010 CAS/HUB Role 2x Exchange 2010 Mailbox Role (Active + Passive) Management wants to make sure that in case of problems in the main location (power failure, link loss, etc.) the second location can support all traffic from around the world, and vice versa. We have 6-7 locations and more coming up (not big ones, more like 10+ people per location). I do know that CAS/HUB is a single point of failure (and no NLB), but I simply lack the licenses to add redundancy there. What do you think about this approach? What would be a better approach, in your view?

    Read the article

  • Upgrading Visio 2000 to Visio 2007

    - by dirtside
    I have Microsoft Visio 2000 SR 1, and recently purchased Microsoft Office Visio Standard 2007 with the understanding (supported by the product info and some other research) that I'd be able to upgrade. However, when I install 2007, it tells me it can't find a previous install of Visio, but... it's right there! Here's the exact message: "Setup can't find a version of Microsoft Office on your computer. If Office is installed on a disk or network share, click the browse button to select the appropriate disk or share... (etc.)" No matter which directory or drive I pick (various Office installs, the old Visio install, various subdirectories) it gives the following message: "The path you have chosen does not point at a qualifying upgradeable product. Click 'Retry' to try again or 'Cancel' to quit setup." Any ideas? This is a legit copy of Visio 2007 (purchased from Amazon) and the copy of Visio 2000 is legit as well. I'm not sure what exactly the installer is looking for that it would consider a "qualifying upgradeable product". A specific file?

    Read the article

  • Virtual PC duplication process

    - by Toddintr
    This is the process I use for duplicating a Virtual PC (on Windows 7): 1 - Create a new VPC. 2 - Install Windows 7 on the new VPC. 3 - Configure the new Windows 7 installation (install Windows updates, install applications, etc). 4 - Run Sysprep on the new VPC. 5 - Shut down the new VPC. 6 - Make a copy of the new VPC's VHD file. 7 - Create a new VPC, specify "use existing VHD file" in the wizard and provide the name of the copied VHD file. Above works fine but there is one point that threw me off: During the OOBE for the duplicated VPC, when asked for a user name, I had to specify a different user name than the one I had specified for the base VPC. This makes sense because the copied VPC already has that user name. But what I did not understand is why I was asked for a new user name at all? Is it because it is part of the OOBE process and when the OOBE was designed by Microsoft, they did not think of the fact that base OS images could be copied? Thanks - -Todd

    Read the article

  • Is this a File Header / Magic Number?

    - by Hammer Bro.
    I've got 120,000 files (way more, actually; this is just an arbitrary subset) of an unknown type. Linux file does not identify them (not that they're necessarily Linux files), nor do any other methods I've tried. There are only two hints about them that I currently have. One is that I suspect some compression is employed -- I have metadata that claims the file sizes are always some amount larger than what I observe. The other is that in 100,000 of these files, the first 16 bytes are always: ff ee ee dd 00 00 00 00 01 00 00 00 00 00 00 00 That really looks like a file header/magic number to me, but I just can't place it. Does anyone know what kind of files this would indicate? Alternatively, can anyone convince me that these suspiciously common bytes certainly do not indicate a specific file type? UPDATE I don't know the exact reverse-engineering details, but most of the files in our case are zips after the first 29(? or so) bytes are ignored. So in practice the problem is solved (we know how to process the files) but in theory the question is still unanswered -- I don't know which application routinely prepends about 29 bytes to its zips. [I'm not sure if I should leave the question open or not at this point.]

    Read the article
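
    For anyone else poking at files like these, a hedged way to test the "29 leading bytes, then a zip" theory on a sample (the offset is the question's own guess, not a known constant, and the file name is a placeholder):

      f=sample.bin
      tail -c +30 "$f" > "$f.stripped"          # +30 means "start at byte 30", i.e. drop 29 bytes
      file "$f.stripped"                        # hopefully reports "Zip archive data"
      unzip -l "$f.stripped" | head             # list the contents if it really is a zip
      # or just locate the first zip local-file-header signature, PK\x03\x04
      grep -abo $'PK\x03\x04' "$f" | head -1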

  • Problems with X11GraphicsDevice on Suse 11

    - by Daniel
    Hi, On servers running Suse 11 I'm experiencing hangups in sun.awt.X11GraphicsDevice.getDoubleBufferVisuals(Native Method) when connecting via Citrix (and setting DISPLAY to localhost:11.0). Running exactly the same code in exactly the same environment, except through Exceed (with DISPLAY set to my workstation's IP), it runs like clockwork. The error is not intermittent; it happens every time. Reinstalling the OS does not help, and I cannot reproduce it on Suse 10. This is what the main thread stack looks like: [junit] "main" prio=10 tid=0x0000000040112000 nid=0x6acc runnable [0x00002b9f909ae000] [junit] java.lang.Thread.State: RUNNABLE [junit] at sun.awt.X11GraphicsDevice.getDoubleBufferVisuals(Native Method) [junit] at sun.awt.X11GraphicsDevice.makeDefaultConfiguration(X11GraphicsDevice.java:208) [junit] at sun.awt.X11GraphicsDevice.getDefaultConfiguration(X11GraphicsDevice.java:182) [junit] - locked <0x00002b9fed6b8e70 (a java.lang.Object) [junit] at sun.awt.X11.XToolkit.(XToolkit.java:92) [junit] at java.lang.Class.forName0(Native Method) [junit] at java.lang.Class.forName(Class.java:169) [junit] at java.awt.Toolkit$2.run(Toolkit.java:834) [junit] at java.security.AccessController.doPrivileged(Native Method) [junit] at java.awt.Toolkit.getDefaultToolkit(Toolkit.java:826) [junit] - locked <0x00002b9f94b8ada0 (a java.lang.Class for java.awt.Toolkit) [junit] at java.awt.Toolkit.getEventQueue(Toolkit.java:1676) [junit] at java.awt.EventQueue.invokeLater(EventQueue.java:954) [junit] at javax.swing.SwingUtilities.invokeLater(SwingUtilities.java:1264) ... Has anyone experienced something similar? Could this be a problem in Suse 11's display handling? I'm thankful for any input at this point - I'm fresh out of ideas :)

    Read the article
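
    Not a fix for the X11 problem itself, but a hedged workaround if the JUnit run doesn't genuinely need a display: keep AWT from touching the X server at all. Whether the tests tolerate headless mode is the assumption here, and if the ant junit task forks its own JVM the property must be passed as a jvmarg in the build file rather than through ANT_OPTS.

      export ANT_OPTS="-Djava.awt.headless=true"    # covers junit running inside ant's own JVM
      ant test                                      # placeholder target name
      # or when launching the tests directly (classpath and class are placeholders):
      java -Djava.awt.headless=true -cp build/classes:lib/junit.jar org.junit.runner.JUnitCore com.example.FooTest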

  • Backup software for incremental swapped-out drives?

    - by user13743
    We're using Acronis Home 11 to back up our main Windows machine at the office. We have a set of portable hard drives that we swap out each week, for redundancy. We have incremental sets (a new diff of the entire series each night) building on each drive. However, from time to time, Acronis gets confused and sometimes makes a new full backup. This eats up a lot of space on the disks. Also, I have to trick the Acronis script each time I swap out a drive and point it to the new incremental backup set. Finally, if a drive gets full, there's no way to partition the backup set on a drive. I found this out the hard way, and now one drive is full with one backup set. So now on the other drive, I have three folders of backup sets. When one starts to get full, I delete the oldest one and start a new set. That way one single drive never gets filled up with one single backup set. I'm looking for backup software that can back up Windows in incremental sets, and doesn't get tripped up by swapped-out drives. Is there a better solution?

    Read the article

  • Windows 7 backup keeps trying to backup non-existent file and folders

    - by Ayusman
    My Windows 7 system backup keeps trying to back up two non-existent files and one folder. I have double-checked that these files do not exist. How does Windows 7 try to back up things that do not exist, and then complain and fail the backup? Here are the messages: Backup encountered a problem while backing up file D:\Non library songs\Klub Arabia. Error:(The system cannot find the file specified. (0x80070002)) Backup encountered a problem while backing up file D:\Non library songs\Klub Arabia 2. Error:(The system cannot find the file specified. (0x80070002)) Backup encountered a problem while backing up file D:\Data\FRIENDS. Error:(The system cannot find the file specified. (0x80070002)) These files/folders may have been there at some point but have since been moved. Any idea how to solve this? Does this mean all other content has been backed up successfully? I have Windows 7 Professional 64-bit and I am backing up my Win7 machine to an external hard drive. Ayusman

    Read the article

  • A fatal exception 0E occured at 0028:xxxxxx in VxD IOS(01)

    - by winlin
    I get a blue screen of death in my windows 98 machine every time I boot it. I can't reach to my desktop. The error is like this: A fatal exception 0E occured at 0028:C003CC2F in VxD IOS(01) + 0000156B This was called from 0028:C0082E60 in VxD VKD(01) + 000001D0 I have to then give it a three finger salute to restart the system. There is no other way to shut down the system at this point except pressing the CPU power button. What could be the problem? My windows system.ini is: [boot] oemfonts.fon=vgaoem.fon shell=Explorer.exe system.drv=system.drv drivers=mmsystem.dll power.drv user.exe=user.exe gdi.exe=gdi.exe sound.drv=mmsound.drv dibeng.drv=dibeng.dll comm.drv=comm.drv mouse.drv=mouse.drv keyboard.drv=keyboard.drv *DisplayFallback=0 fonts.fon=vgasys.fon fixedfon.fon=vgafix.fon 386Grabber=vgafull.3gr display.drv=pnpdrvr.drv [keyboard] keyboard.dll= oemansi.bin= subtype= type=4 [boot.description] system.drv=Standard PC mouse.drv=Standard mouse keyboard.typ=Standard 101/102-Key or Microsoft Natural Keyboard aspect=100,96,96 display.drv=Standard PCI Graphics Adapter (VGA) [386Enh] ;device=tddebug.386 ;device=D:\TC\TASM\BIN\WINDPMI.386 ebios=*ebios woafont=dosapp.fon mouse=*vmouse, msmouse.vxd device=*dynapage device=*vcd device=*vpd device=*int13 keyboard=*vkd display=*vdd,*vflatd ConservativeSwapfileUsage=0 Paging=on [NonWindowsApp] TTInitialSizes=4 5 6 7 8 9 10 11 12 13 14 15 16 18 20 22 [power.drv] [drivers] wavemapper=*.drv MSACM.imaadpcm=*.acm ;msvideo.STV680=STV680sg.drv midi=mmsystem.dll wave=mmsystem.dll MSACM.msadpcm=*.acm [iccvid.drv] [mciseq.drv] [mci] cdaudio=mcicda.drv sequencer=mciseq.drv waveaudio=mciwave.drv avivideo=mciavi.drv videodisc=mcipionr.drv vcr=mcivisca.drv MPEGVideo=mciqtz.drv MPEGVideo2=mciqtz.drv [vcache] [MSNP32] [DISPLAY] BusThrottle=1 [network] SSID=1438661605 [vicax] msacm711=74603 msacm811=148933 msacm911=42405 [Sessew] VideoManufacturer=Standard VGA VideoBoard=Standard Display Adapter (VGA) MouseType=0 VidType=0 Mono=0 Ddraw=1 [drivers32] msacm.lhacm=lhacm.acm VIDC.IV50=ir50_32.dll msacm.iac2=C:\WINDOWS\SYSTEM\IAC25_32.AX VIDC.YUY2=msyuv.dll VIDC.UYVY=msyuv.dll VIDC.YVYU=msyuv.dll msacm.msaudio1=msaud32.acm msacm.vorbis=vorbis.acm msacm.l3acm=C:\WINDOWS\SYSTEM\L3CODECA.ACM msacm.sl_anet=sl_anet.acm VIDC.TSCC=tsccvid.dll VIDC.IV41=IR41_32.AX vidc.mpg4=mpg4c32.dll vidc.mp43=mpg4c32.dll msacm.voxacm160=vct3216.acm MSACM.msadpcm=msadp32.acm [TTFontDimenCache] 0 4=2 4 0 5=3 5 0 6=4 6 0 7=4 7 0 8=5 8 0 9=5 9 0 10=6 10 0 11=7 11 0 12=7 12 0 13=8 13 0 14=8 14 0 15=9 15 0 16=10 16 0 18=11 18 0 20=12 20 0 22=13 22

    Read the article
