Search Results

Search found 18940 results on 758 pages for 'existing installation'.

Page 264/758

  • How to correctly deploy Adobe Reader 9.1

    - by Ben Gillam
    Hi, I have recently tried to deploy Adobe Reader 9.1 onto our network here (SBS 2003 server and XP workstations). I followed the instructions for extracting the installer and .msi, and then created a .mst transform file to set custom options (suppress EULA, don't create a desktop icon, etc.). I then added the package to my deployment GPO, applied the relevant .mst file, and proceeded to deploy across the network. The software package is computer-assigned, to be installed prior to logon, to avoid user permission issues.

    The package deploys correctly to computers and runs perfectly fine from a shortcut; however, when trying to view a PDF from within a web browser it fails with the following message:

        The adobe acrobat/reader that is running can not be used to view PDF files in a web browser. Adobe Acrobat/Reader version 8 or 9 is required. Please exit and try again

    I have found many pages on Google referring to this problem, but none appear to relate to the problem I am seeing, e.g. http://kb2.adobe.com/cps/405/kb405461.html. The fixes suggested there:

    Correcting a registry entry (which, I should mention, is missing after the deployed installation) - however, this does not work.
    Switching off display in a browser - seems to defeat the object of fixing the problem.
    Removing old versions - there aren't any.
    Trying with a different user - this affects all users of all privilege levels on all computers.

    On my workstation I uninstalled Acrobat Reader 9.1, then reinstalled manually using the same installation source files, and it works fine. Has anyone successfully deployed AR 9.1 on their domain, and if so, how? For the time being I have downloaded the older 8.1.3 release and deployed it in the same way, which works fine, but I would like to be using the up-to-date version. Thanks
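
    For reference, a sketch of testing the package by hand on one workstation before pushing it through the GPO (the extraction switches are Adobe's documented ones as I recall them, and the file names are assumptions based on the 9.1 download, not taken from the post):

        :: extract the .msi from the self-extracting installer without running setup
        AdbeRdr910_en_US.exe -nos_o"C:\AR91" -nos_ne
        :: silent test-install with the transform applied
        msiexec /i C:\AR91\AcroRead.msi TRANSFORMS=C:\AR91\AcroRead.mst /qn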


  • Use an SSH tunnel with phpMyAdmin

    - by JohnMerlino
    I've been using an SSH tunnel to bypass the firewall of a remote MySQL server. On my Ubuntu 12.04 installation it works via the terminal, and it works when using a program called MySQL Workbench. However, that program freezes often and I want to try phpMyAdmin as an alternative. However, I cannot connect to the remote server through the SSH tunnel in phpMyAdmin, although I can connect locally. These are the steps I've tried:

    1) Open a tunnel, listening on localhost:3307 and forwarding everything to xxx.xxx.xxx.xxx:3306 (I used 3307 because MySQL on my local machine uses the default port 3306):

        ssh -L 3307:localhost:3306 [email protected]

    So now I have the tunnel port open alongside my local MySQL installation's default port:

        $ netstat -tln
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address           Foreign Address         State
        tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN
        tcp        0      0 127.0.0.1:3307          0.0.0.0:*               LISTEN
        tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
        ...

    2) Now I can easily connect to the remote server via localhost using the terminal:

        $ mysql -u user.name -p -h 127.0.0.1 -P 3307

    Notice that I explicitly specify 3307 as the port, so traffic forwards to the remote server, and hence it logs me in to the remote server. Unfortunately, the phpMyAdmin login page at localhost/phpmyadmin doesn't allow you to specify a port. So I modified the config-db.php file and changed the $dbport variable to 3307, under the impression that the phpMyAdmin interface would now use port 3307:

        $ sudo vim /etc/phpmyadmin/config-db.php
        $dbport='3307';

    Then I restarted the MySQL server. Unfortunately, it didn't work. When I use the remote credentials to log in, it gives me the error: #1045 Cannot log in to the MySQL server
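
    One hedged observation, worth verifying: on Debian/Ubuntu, config-db.php holds the settings dbconfig-common generated for phpMyAdmin's own control database, not the server you log in to; the login target is defined in config.inc.php. A sketch of pointing it at the tunnel - these $cfg keys are standard phpMyAdmin settings, and the path assumes the Ubuntu package:

        // in /etc/phpmyadmin/config.inc.php
        $cfg['Servers'][$i]['host'] = '127.0.0.1';   // TCP to the tunnel, not the local socket
        $cfg['Servers'][$i]['port'] = '3307';
        $cfg['Servers'][$i]['connect_type'] = 'tcp';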


  • Ubuntu 13.10 - How to disable LVM and cryptsetup? cryptsetup: evms_activate is not available

    - by NeverEndingQueue
    I am trying to remove whole-drive encryption from my Ubuntu installation. I ran Ubuntu from a live CD, mounted the crypt partition and copied it to another partition, /dev/sda3:

        sudo cryptsetup luksOpen /dev/sda5 crypt1
        sudo dd if=/dev/ubuntu-vg/root of=/dev/sda3 bs=1M

    After that I ran boot-repair (https://help.ubuntu.com/community/Boot-Repair) and added an entry to /etc/fstab:

        UUID=<uuid> / ext4 errors=remount-ro 0 1

    Of course I replaced <uuid> with the blkid result for my /dev/sda3. I also deleted the overlayfs and tmpfs lines from /etc/fstab (I compared it to the content of /etc/fstab in a non-encrypted Ubuntu installation and could not find overlayfs and tmpfs there). I chrooted from the live CD into my system and rebuilt the initramfs, following http://blog.leenix.co.uk/2012/07/evmsactivate-is-not-available-on-boot.html. I also removed cryptsetup using apt-get remove.

    Basically I can easily mount my system partition from the live CD (without setting up the encryption and LVM stuff), but I cannot boot from it. Instead I see:

        cryptsetup: evms_activate is not available

    When I chose recovery mode I saw this:

        Begin: Mounting root file system ...
        Begin: Running /script/local-top ...
        Reading all physical volumes. This may take a while ...
        No volume groups found
        cryptsetup: evms_activate is not available
        Begin: Waiting for encrypted source device ...

    My /etc/crypttab is empty. I am pretty sure the system still tries to find an encrypted partition, search for LVMs, etc. Do you have any ideas what the problem could be, or how I can fix it? Thanks
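
    For reference, a minimal sketch of the chroot-and-rebuild step from the live CD (device names follow the post; the bind mounts are the usual prerequisites for update-initramfs):

        sudo mount /dev/sda3 /mnt
        for d in dev dev/pts proc sys; do sudo mount --bind /$d /mnt/$d; done
        sudo chroot /mnt
        # inside the chroot: regenerate the initramfs and boot menu without the crypto/LVM hooks
        update-initramfs -u -k all
        update-grub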


  • Dell Latitude XT touch screen issue

    - by Jake
    Yesterday I installed Windows 7 Ultimate after having had the Enterprise version on this machine for a few months. The installation went smoothly and everything worked fine except for finger detection on the screen (only the pen worked) and the bottom screen buttons for rotating and so on. So I started to install all the missing drivers Dell recommends for this model on their site, but when I tried to install the N-Trig digitizer driver the installation said it was unsuccessful, and since then touch has stopped working completely!

    I tried System Restore but it didn't help, so I went on, formatted the hard disk completely once again and installed Windows, but that didn't work either. I tried to install N-Trig's driver, but it alerted with a fatal error and said that no device was detected. Same story with N-Trig rollback. So I checked Device Manager and saw an "unknown device" with VID and PID values set to 0000. I figure the N-Trig driver might have messed with the device firmware or something, and now it doesn't know its ID and manufacturer... Is there anything that can be done, like forcing the N-Trig driver to install on this device or something? Please help!


  • How do you recreate the System Recovery environment in Windows 7?

    - by Howiecamp
    I'm running Windows 7 Home Premium RTM (64-bit) and I want to take advantage of the system recovery tools (e.g. the Command Prompt) without using the Windows 7 DVD. My understanding is that this environment (WinRE) should be installed to your HDD by default as part of the Windows 7 installation. However, when I hit F8 on boot and select "Repair", I get:

        Windows failed to start. A recent hardware or software change might be the cause. To fix the problem...
        Status: 0xc000000e
        Info: The boot selection failed because a required device is inaccessible.

    The "Info" line seems like the smoking gun. My next step was to boot from the Windows 7 DVD and choose "Repair". It indicated my recovery environment wasn't on the Windows 7 boot menu (perfect) and offered to fix it. I said yes and rebooted; however, same issue as above. In addition, when I booted into Windows 7 and looked at the boot menu options, the recovery/repair option was not there - only my Windows installation.

    Finally, I ran the Disk Management tool (diskmgmt.msc) and took a look at the contents of my "System Reserved" partition (which was set to "Active" as normal). It's unclear to me what the contents should look like, but it is my understanding that the WinRE environment gets installed to this partition. (As part of the above troubleshooting I followed http://superuser.com/questions/25728/how-to-fix-windows-7-boot-process, which led to http://www.sevenforums.com/tutorials/668-system-recovery-options.html.)
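
    For reference, Windows 7 ships a built-in tool, reagentc.exe, for inspecting and re-registering the on-disk WinRE image; a sketch from an elevated prompt (the image path is an assumption - it varies by install):

        :: show whether WinRE is enabled and where the image is registered
        reagentc /info
        :: point the loader at the on-disk image and re-enable it (path assumed)
        reagentc /setreimage /path C:\Recovery\WindowsRE
        reagentc /enable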


  • Recommended programming language for Linux server management and web UI integration

    - by Brendan Martens
    I am interested in making an in-house web UI to ease some of the management tasks I face administrating many servers; think Canonical's Landscape. This means doing things like applying package updates simultaneously across servers, perhaps installing a custom .deb (I use Ubuntu/Debian), reviewing server logs, executing custom scripts, and viewing status information for all my servers. I hope to be able to reuse existing command line tools instead of rewriting the exact same operations in a different language myself. I really want to develop something that allows me to continue managing at the SSH level but offers the power of a web interface for easily applying the same infrastructure-wide changes. They should not be mutually exclusive.

    What are some recommended programming languages to use for doing this kind of development and tying it into a web UI? Why do you recommend the language(s) you do? I am not an experienced programmer, but I view this as an opportunity to scratch some of my own itches as well as become a better programmer. I do not care specifically whether one language is harder than another; I am more interested in picking the best tools for the job from the beginning. Feel free to recommend any existing projects except Landscape (not free), eBox (not entirely free, and more than I am looking for), and Webmin (I don't like it - it feels clunky and does not integrate well with the "Debian way" of maintaining a server, IMO). Thanks for any ideas!


  • Cisco ASA user authentication options - OpenID, public RSA sig, others?

    - by Ryan
    My organization has a Cisco ASA 5510 which I have made act as a firewall/gateway for one of our offices. Most resources a remote user would come looking for exist inside. I've implemented the usual deal - basic inside networks with outbound NAT, one primary outside interface with some secondary public IPs in the PAT pool for public-facing services, a couple of site-to-site IPsec links to other branches, etc. - and I'm working now on VPN. I have the WebVPN (clientless SSL VPN) working and even traversing the site-to-site links. At the moment I'm leaving a legacy OpenVPN AS in place for thick-client VPN. What I would like to do is standardize on an authentication method for all VPN, then switch to the Cisco's IPsec thick VPN server.

    I'm trying to figure out what's really possible for authentication for these VPN users (thick client and clientless). My organization uses Google Apps and we already use DotNetOpenAuth to authenticate users for a couple of internal services. I'd like to be able to do the same thing for thin and thick VPN. Alternatively, a signature-based solution using RSA public keypairs (ssh-keygen type) would be useful to identify user@hardware. I'm trying to get away from legacy username/password auth, especially if it's internal to the Cisco (just another password set to manage and for users to forget). I know I can map against an existing LDAP server, but we have LDAP accounts created for only about 10% of the user base (mostly developers, for Linux shell access).

    I guess what I'm looking for is a piece of middleware which appears to the Cisco as an LDAP server but will interface with the user's existing OpenID identity. Nothing I've seen in the Cisco suggests it can do this natively. But RSA public keys would be a runner-up, and much, much better than standalone or even LDAP auth. What's really practical here?


  • DPM 2010 server attach agent error: administrator privileges missing?

    - by Michael
    I'm hoping you can help me out with this little problem I'm having. I installed DPM 2010 in our test environment to test backups of Exchange 2010 servers. The environment includes:

    1x DC
    2x Exchange Server 2010
    1x DPM 2010 server

    All of these run as Windows Server 2008 R2 virtual machines; the host machines use Hyper-V. The problem goes like this:

    1. I tried to install the agents from the DPM server GUI, which failed saying I didn't have the correct permissions.
    2. So then I tried the manual installation using the commands from the Microsoft site: http://technet.microsoft.com/en-us/library/bb870935.aspx
    3. The agent installation worked, but when I get to attaching the agents to the DPM server it still gives me the error saying that the specified account does not have administrator rights.
    4. I tried the domain admin, users who are domain admin + local admin, and single local admins.
    5. I have turned off the Windows firewall and made sure all the services are running.

    So now I'm out of ideas and really need help; attaching the agents to the DPM server is the last thing holding me back from deploying everything to the production site. Any help would be really appreciated.
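
    For reference, the manual attach in that TechNet procedure is two-sided, and both halves need elevated prompts; a sketch with placeholder server names (verify the exact syntax against the linked article):

        :: on each protected server, after the manual agent install
        cd /d "%ProgramFiles%\Microsoft Data Protection Manager\DPM\bin"
        SetDpmServer.exe -dpmServerName DPM01

        :: then on the DPM server, in the DPM Management Shell (arguments per that page:
        :: DPM server, protected server, user, password, domain - all placeholders here)
        Attach-ProductionServer.ps1 DPM01 EXCH01 administrator P@ssw0rd TESTDOMAIN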


  • Permission denied (publickey,gssapi-with-mic,password) ssh error

    - by zentenk
    Heads up, I'm a noob with Linux and networking. I set up an Ubuntu server and I have a static IP for my network. When I try to connect to the server at home externally, it prompts me to log in. When I supply the correct password (or an incorrect one), I get the error:

        Permission denied, please try again.

    and after 3 times I get:

        Permission denied (publickey,gssapi-with-mic,password)

    I am, however, able to connect with SSH from another computer in the same network with:

        ssh <internal ip of server>

    I'm connecting from Mac OS X and my config file is vanilla. Note: during the installation of Ubuntu it said I didn't have a default route (or something) while doing automatic network configuration, but I ignored it and continued the installation; could this be the problem?

    EDIT: I have tried the below. I have nothing in hosts.allow, and iptables shows the ports that I have allowed, which is 22. I checked auth.log, and there is nothing when I connect remotely (even when it says permission denied). When I connect internally, the expected authentication logs show up. Any idea what's wrong?
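
    One way to narrow this down from the Mac is the SSH client's verbose mode, which shows whose daemon actually answers and which auth methods it offers (the address is a placeholder):

        ssh -v [email protected]
        # watch the remote version banner and the "Authentications that can continue:" lines;
        # since nothing reaches the server's auth.log, the external prompt may be coming from
        # the router's own SSH service rather than being forwarded to the Ubuntu box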


  • Upgrading PHP, MySQL old-passwords issue

    - by Rushyo
    I've inherited a Windows 2k3 server running a XAMPP installation from the stone age. I needed to upgrade PHP to facilitate an upgrade to MediaWiki to facilitate a new MediaWiki extension (to facilitate some documentation to facilitate doing my job to facilitate getting paid to facilit... you get the idea). However, installing a new version of PHP resulted in PHP's MySQL libraries refusing to communicate using MySQL's 'old style' 152-bit passwords. Not a problem in theory: the MySQL installation is post-4.1, so it should have the functionality to upgrade the users' passwords from 152-bit to 328-bit (what a weird hashing algorithm...). I ran the following on MySQL:

        SET PASSWORD = PASSWORD('foo');

    but querying:

        SELECT user, password FROM mysql.user;

    returned just the same password I started out with - 152-bit. Now... I suspect you're thinking 'AHA! old-passwords is on!'. Unfortunately it's not - I've disabled it in the configuration (explicitly set it to 0), made doubly sure I have an absolute reference to that configuration file, and ensured the service isn't using the --old-passwords flag. The service was restarted after each and every operation. So I went onto another system, generated the 328-bit hash there, and copied the hash over to the first MySQL instance. Unfortunately, that didn't work either (I did remember to FLUSH PRIVILEGES). The application error is:

        mysqlnd cannot connect to MySQL 4.1+ using the old insecure authentication. Please use an administration tool [...snip...]

    Is there anything else I can try to get PHP to recognise MySQL as not using the 'old insecure authentication'? MySQL seems to be stuck in old-passwords mode and I can't get it out of it.
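
    A sketch that may help pin it down (standard MySQL statements; the account name is a placeholder): old_passwords can be overridden per session, which bypasses whatever the server picked up at startup, and the stored hash length tells you which format actually got written:

        -- force new-style hashing for this session regardless of the server default
        SET SESSION old_passwords = 0;
        SET PASSWORD FOR 'wikiuser'@'localhost' = PASSWORD('foo');
        -- 41-character hashes starting with '*' are new-style; 16-character hashes are old-style
        SELECT user, host, LENGTH(Password) AS hash_len FROM mysql.user;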


  • Cacti Login Page: Infinite loop occurs

    - by beicha
    Apache 2.2.3 | PHP 5.1.6 | MySQL 5.0.77

    I followed the Cacti installation guide to install the latest Cacti 0.8.7h on CentOS 5.5 (64-bit). The installation of PHP/Apache/MySQL went smoothly until I finished the setup and came to the login page. I can log in to http://.../cacti/index.php with the admin account, but the new page redirects back to the same login page with the message "Please enter your Cacti user name and password below". This is an infinite loop! If I use a wrong admin password, I get the correct error message "Invalid User Name/Password Please Retype". If I log in with the Guest/guest account, "Error: Access Denied, user account disabled." displays. The Cacti log file (./cacti/log/cacti.log) is empty. I Googled, and it seems this problem has existed for a long time, but there were no follow-up solutions in the forum posts I found. Can anyone help me with this problem? If more information is needed, please let me know.

    Nov 18, 2011 UPDATE: I re-installed Cacti; this question remains UNSOLVED.
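
    One hedged thing to check - not from the post, just a common cause of logins that accept the password and then bounce straight back: PHP sessions that never persist. On a stock CentOS 5 box:

        # where PHP writes session files, and whether Apache can write there
        php -i | grep -i session.save_path
        ls -ld /var/lib/php/session   # usually needs to be writable by the 'apache' group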


  • Win7 Hangs During App Install/Upgrade/Uninstall

    - by JadeMason
    I have a custom-built PC that intermittently hangs when installing, uninstalling, or upgrading applications.

    Technical specs:

    ASUS P5E w/ WiFi motherboard
    Intel Core 2 Quad Q6600 processor
    4x 2GB G.Skill DDR2 800 SDRAM
    ASUS EAH2900XT / Radeon HD 2900XT 512MB video card

    Under normal operation the machine runs reliably, even under heavy load such as video transcoding; the temperature never gets anywhere near where I would worry about it. However, the machine regularly hangs (complete lockup: no response to keyboard or mouse, no activity on-screen) when installing a new application, uninstalling an existing application, or applying patches to existing applications or the OS. This is extremely frustrating, as this machine is primarily used as an HTPC. Several apps are configured for automatic updates, and these updates sometimes cause the machine to lock up while we are watching content on the PC.

    In previously investigating this issue, I found one likely culprit could be my Logitech webcam. The Logitech software has a bug that leaves an entry in:

        HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\PendingFileRenameOperations

    which references the Temp directory. My registry contained this error, so I uninstalled the webcam software and deleted this registry value. Unfortunately, the machine still intermittently hangs.

    I've noticed that the hangs always happen when an install/upgrade/uninstall requires elevated privileges (presumably to modify the registry). I can typically get at least one install/upgrade/uninstall to complete after a reboot, but after that it is a game of Russian roulette whether the operation will succeed or hang the machine. The event log is not helpful: log messages end at the time of the hang, with no record of a warning or error. My only recourse when the machine hangs this way is a hard reset/power cycle. Any tips on how to further debug this issue are greatly appreciated.
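
    For anyone checking the same value, it can be inspected without opening regedit (standard reg.exe syntax; note the space in "Session Manager"):

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager" /v PendingFileRenameOperations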


  • Write once, read many (WORM) using Linux file system

    - by phil_ayres
    I have a requirement to write files to a Linux file system that cannot subsequently be overwritten, appended to, updated in any way, or deleted - not by a sudo-er, root, or anybody. I am attempting to meet the requirements of the financial services regulations for recordkeeping, FINRA 17A-4, which basically require that electronic documents are written to WORM (write once, read many) devices. I would very much like to avoid having to use DVDs or expensive EMC Centera devices.

    Is there a Linux file system, or can SELinux support the requirement, for files to be made completely immutable immediately (or at least soon) after write? Or is anybody aware of a way I could enforce this on an existing file system using Linux permissions, etc.?

    I understand that I can set read-only permissions and the immutable attribute, but of course I expect that a root user would be able to unset those. I considered storing data on small volumes that are unmounted and then remounted read-only, but then I think root could still unmount and remount them as writable again.

    I'm looking for any smart ideas, and in the worst case I'm willing to do a little coding to 'enhance' an existing file system to provide this, assuming there is a file system that is a good starting point - and to put in place a carefully configured Linux server to act as this type of network storage device, doing nothing else. After all of that, encryption on the files would be useful too!
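
    For reference, the attribute route mentioned above looks like this (standard e2fsprogs commands), and it also illustrates the gap - the same root that sets +i can clear it:

        chattr +i /archive/records/trades-2012-01.log   # file name is a made-up example
        lsattr /archive/records/trades-2012-01.log      # the 'i' flag now shows
        chattr -i /archive/records/trades-2012-01.log   # ...but root can simply undo it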


  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing - I've spent a fair bit of time researching it and various providers, but have yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances, though, it seems as though each instance should be completely decoupled from all other instances. i.e. if content is uploaded to the web server's local filesystem from a custom CMS backend, then that content won't be available if subsequently requested from a different web server in the auto-scaling group. Is that right?

    I met with a representative of our existing hosting provider recently, and he claimed that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates, so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal-matter to me... should I be skeptical about this?

    I'm a little worried, because my (non-technical) boss, who ultimately makes the decisions, is all for signing up to this cloud solution because it won't require any extra work. I'm sure they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server will always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that'll slow things down. I'd really appreciate it if somebody could clear this up for me. I'm all for switching to AWS and using S3+CloudFront for all static content, but that isn't looking very likely to happen at the moment.


  • Debian Wheezy (testing): df-reported volume size

    - by TheRoadrunner
    I am a bit confused about the /dev/sda* references since I installed Wheezy instead of Squeeze on a testing box. fdisk -l returns:

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e9623

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *          2048   480278527   240138240   83  Linux
        /dev/sda2         480280574   488396799     4058113    5  Extended
        /dev/sda5         480280576   488396799     4058112   82  Linux swap / Solaris

    This seems correct. But df -h /dev/sda (and /dev/sda1 and /dev/sda2 and /dev/sda5) returns:

        Filesystem      Size  Used Avail Use% Mounted on
        udev             10M     0   10M   0% /dev

    The same happens with every entry under /dev/disk/by-id and /dev/disk/by-path. Only one of the two entries under /dev/disk/by-uuid returns the correct volume size:

        df -h /dev/disk/by-uuid/cacdbad6-7e6b-4e80-84ba-e3c77ef48796
        Filesystem                                              Size  Used Avail Use% Mounted on
        /dev/disk/by-uuid/cacdbad6-7e6b-4e80-84ba-e3c77ef48796  229G   22G  196G  11% /

    Contents of /etc/fstab:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # <file system>                             <mount point>   <type>      <options>          <dump> <pass>
        # / was on /dev/sda1 during installation
        UUID=cacdbad6-7e6b-4e80-84ba-e3c77ef48796   /               ext4        errors=remount-ro  0      1
        # swap was on /dev/sda5 during installation
        UUID=45840d13-ee36-4e77-8e73-16cbdff25eb1   none            swap        sw                 0      0
        /dev/sr0                                    /media/cdrom0   udf,iso9660 user,noauto        0      0
        /dev/fd0                                    /media/floppy0  auto        rw,user,noauto     0      0

    It seems all references other than the UUID one point to the swap partition. Is this because Wheezy is in testing, and should it be reported as an error?
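
    A detail of df's behavior that may explain the above (this is how GNU df works, not a Wheezy bug): a device argument is matched against the device name recorded in /etc/mtab, and this root filesystem was mounted under its /dev/disk/by-uuid/... name. Any path that doesn't match falls back to the filesystem containing the node itself - the udev tmpfs on /dev:

        df -h /                # the mount point always resolves: shows the 229G root volume
        grep ' / ' /etc/mtab   # shows which device name the root mount was recorded under
        df -h /dev/sda1        # no mtab match, so df falls back to udev mounted on /dev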


  • SQL Server 2005 SE SP3 on Windows Server 2008 R2 x64 premature query disconnections

    - by southernpost
    New Dell PowerEdge R910: 4x Intel Xeon X7560 (8 cores each), 192GB RAM, hardware NUMA, local RAID, Broadcom NetXtreme II multi-port NIC (unteamed; TCP offload, RSS, and NetDMA disabled), hyperthreading disabled. SQL Server 2005 SE x64 SP3 on Windows Server 2008 R2 EE x64; no other apps on the server. Max memory = 180GB, max DOP = 4. An existing Windows Server 2003 R2 EE x64 app server connects to the Dell through a firewall using SQL-authenticated logins.

    Symptoms: intermittent errors at the app server:

        A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)

    Findings:

    Running queries from SSMS on another machine within the same domain as the SQL Server works without error.
    SQLIO showed good performance.
    The Windows and SQL logs show no related messages.
    Microsoft reviewed a PssDiag trace and stated: "We are not seeing timeouts from SQL Side. The queries being run against the database are timing out within 9secs. This is a database connectivity error." and "we can also see from the AttnSeq column that we are also not seeing any Attentions from the SQL Side."
    Dell has confirmed that we are using the latest Broadcom drivers.


  • Windows drive letters A: and B:

    - by Workshop Alex
    This is a question that just popped into my mind, and I can't help but wonder why it's still common for a Windows installation to be installed on C:, with all other drive letters going up from D: to Z:. In the early MS-DOS times all we had were floppy disks, and they were at A:. When the 3.5-inch floppy started to replace the 5.25-inch floppy, many people had an A: and a B: drive. Then the hard disk became popular, and it was at C: because A: and B: were taken. Then the 5.25-inch floppy disappeared, and most computers had a gap between A: and C:. Nowadays the 3.5-inch floppy is just too outdated, so A: has disappeared too, and all disks now start at C:.

    Yeah, I know I can assign my own drive letters, and I've done so with my data disks. My installation disk will just continue to be stuck at C:, and I don't really mind; I have no problems with drive letters. But why do new Windows versions continue to install themselves by default on C: instead of assigning the letter A: to the boot hard disk?


  • Some doubts about the use of the usermod and groupmod commands

    - by AndreaNobili
    I am not yet a true "Linux guy" and I have some doubts about the following shell procedure (a list of command steps) found in a tutorial that I am following (I want to understand deeply what I am doing before I do it):

        sudo passwd root
        (then log in again as root)
        usermod -l miner pi
        usermod -m -d /home/miner miner
        groupmod -n miner pi
        exit

    At the beginning it enables the root account, and I have to log in again as root - this is perfectly clear to me. Now I have the following doubts:

    1) The usermod command:

        usermod -l miner pi
        usermod -m -d /home/miner miner

    Reading the official documentation of the usermod command, I understand that this command modifies the information related to an existing account. It seems to me that the -l parameter changes the name of the user pi to miner, and that the -m -d parameters move the contents of the old home directory to the new one (/home/miner) and set this new directory as the home directory. My doubt is: what exactly does executing these operations do? I think that they:

    rename the existing pi user to miner;
    then move the content of the old home directory (the pi home directory? or what?) into a new directory (/home/miner) that is now the home directory for the miner user.

    Is that right?

    2) The second doubt is related to this command:

        groupmod -n miner pi

    It seems to me that it changes the group name from pi to miner. But what exactly is a group in Linux, and why is it used? Thanks
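
    A hedged way to see exactly what those three commands change is to inspect the account database before and after (getent is standard; pi is typically UID 1000 on Raspbian, which is an assumption here):

        getent passwd pi        # before: pi:x:1000:1000:...:/home/pi:/bin/bash
        sudo usermod -l miner pi
        sudo usermod -m -d /home/miner miner
        sudo groupmod -n miner pi
        getent passwd miner     # same UID, new login name and home directory
        getent group miner      # same GID, new group name
        ls -ld /home/miner      # the old /home/pi contents, moved here by -m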


  • Windows 7 - Can't get my TV working as primary display with nVidia 7900GS

    - by Daniel Schaffer
    I just installed Windows 7 Ultimate 64 RTM (from MSDN) on my HTPC, which is connected to a 42" Magnavox LCD TV via component cables to my nVidia 7900GS. Everything was fine through the installation until I went to install the official driver from nVidia. Towards the end of that installation, the TV blinked off and wouldn't come back on. I went and got an LCD monitor and plugged it into a DVI port; the monitor came right up, but was automatically selected as the primary display. Now, if I set the TV to be the primary display, the TV just blanks until I hit Escape to cancel the "settings have changed, do you want to keep them" dialog. Any suggestions?

    Update: I'm able to set the TV as the primary display using the Windows 7 "Screen Resolution" configuration panel. However, if I try to remove the LCD monitor, either by unplugging it or through the configuration, the TV blanks out again.

    Update 2: This setup was working correctly in Vista Home Premium 32-bit.

    Update 3: I've uninstalled the nVidia driver and am using the driver that Windows Update installed. As much as this offends my geek sensibilities (must use the "right" driver!!), well, It Works™.


  • Re-packaging commercial software into RPM packages

    - by gac
    The situation is this - I have a small CentOS 5 "cluster" (currently 7 machines, but with potential for more) which runs a commercially available software package that's distributed essentially in tarball format (it's actually a zip file with a mixture of Windows/Linux binaries and an installation shell script, with no potential for automation). I'd like to re-package this somehow into an RPM package (ideally one I can throw onto a self-hosted yum repository) in order to keep these "cluster" machines both up to date and consistent. I could do 7 manual installations, but there's scope for error.

    As I understand it, I'll need to accomplish the following tasks:

    add a non-privileged user to the target system for running the daemon without unnecessary root privileges
    package up the binary files themselves from the final installation location on a separate build machine (probably under /opt/package for sanity's sake) - no source is available
    add a firewall hole so that end users can communicate with the "cluster" nodes
    add a cron task which can start the daemon on @reboot

    I'm finding plenty of good packaging resources so far, but all are based on the traditional method (i.e. as if I were the vendor packaging up my source files) rather than on re-packaging a ton of binary files from an already-installed instance of the application, which is the only option available to me. Does anyone have any good resources they can share for achieving this goal? Thanks!
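
    For what it's worth, a minimal sketch of what a binary-repack .spec could look like, covering the four tasks above - every name in it (package, user, port, paths) is a placeholder, not something from the post:

        # acme-app.spec - hypothetical sketch of repackaging an installed binary tree
        Name:           acme-app
        Version:        1.0
        Release:        1%{?dist}
        Summary:        Repackaged vendor binaries
        License:        Proprietary
        AutoReqProv:    no    # don't let rpmbuild guess dependencies for opaque vendor binaries

        %description
        Vendor binaries captured from a reference install under /opt/acme-app.

        %install
        mkdir -p %{buildroot}/opt/acme-app
        cp -a /opt/acme-app/. %{buildroot}/opt/acme-app/   # copy from the build machine's install

        %pre
        getent passwd acme >/dev/null || useradd -r -d /opt/acme-app acme

        %post
        iptables -I INPUT -p tcp --dport 12345 -j ACCEPT            # placeholder port
        echo '@reboot acme /opt/acme-app/bin/daemon' > /etc/cron.d/acme-app

        %files
        /opt/acme-app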


  • Will a SQL Server client alias survive a sysprep?

    - by shufler
    I want to sysprep a Windows Server 2008 R2 SP1 machine that has SQL Server 2008 R2 SP1 installed (for reference, SQL Server 2008 R2 has a new sysprep feature that allows the instance itself to be sysprepped). On the server is a SQL Server client alias that points to the default SQL Server database engine instance. For reference, the alias is called Alias-SQLServer and has been configured in both the 32-bit and 64-bit cliconfg versions (that is, both registry keys exist).

    The alias points to the local instance, as the image will be used to create development VMs, and the installation script for the application being developed uses the SQL Server client alias in order to generalize the installation scripts. I can't seem to find information about whether the sysprep tool will update the SQL Server client alias's registry keys with the server's new name once it's unsealed. My guess is that it will not; how is sysprep to know that the server name the alias points to will be different for each image? Right? Perhaps if the alias points to localhost instead of the server name, this will work?
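
    For reference, client aliases are plain registry values, so an unsealed clone can be checked directly (these are the standard 64-bit and 32-bit alias locations; the value format in the comment is a typical TCP alias, and the sysprep remark is my understanding, not documented behavior):

        reg query "HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo" /v Alias-SQLServer
        reg query "HKLM\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo" /v Alias-SQLServer
        :: a TCP alias's data looks like "DBMSSOCN,<server name>,1433"; sysprep has no
        :: SQL-alias-aware pass, so whatever name is baked in should stay there - which is
        :: why pointing the alias at localhost would sidestep the rename entirely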


  • Mass-migrating from POP3 to Exchange 2010, how do I copy mailboxes?

    - by Erik P. Skaalerud
    I'm in the process of planning our migration from an internally hosted POP3 server (Dovecot) to Exchange 2010. We're using Outlook 2003 for the moment but will soon upgrade to Outlook 2010. The big problem is that we have about 50 computers here in our HQ, plus ~30 clients in branch offices (which will get their Exchange migration sometime later). I'm the only IT personnel, and having to go around manually setting up Outlook and copying over their PST contents is not an option I'm looking for. Some users have set Outlook to keep messages for X number of days on the POP3 server, others have not, so using a POP3 connector to transfer the mail over is not viable.

    Here is what I've done so far:

    Created a transform for the Office 2003 administrative installation point
    Created a .PRF file to modify any existing e-mail account to switch over to Exchange (including the RPC-encryption hotfix described in MSKB 2006508)
    Tested both the transform and the PRF; both work
    Created a test OU and GPO containing the Office 2003 installation with the transform applied; this also works

    My big question is: how can I force Outlook to import any existing .PST into the new Exchange mailbox when the user starts Outlook for the first time after the MST/PRF have been applied? Is this possible?


  • What Device/System to use as a "router on a stick"

    - by Jeff Leyser
    I need to create several distinct VLANs and provide a way for traffic to move between them. A "router on a stick" approach seems ideal:

        Internet
           |
        Router with Trunking Capability ("router on a stick")
           *
           *   Trunk between router and switch
           *
        Switch with Trunking Capability
          |      |      |      |      |
          |    LAN 2    |    LAN 4    |
          | 10.0.2.0/24 | 10.0.4.0/24 |
          |             |             |
        LAN 1         LAN 3         LAN 5
        10.0.1.0/24   10.0.3.0/24   10.0.5.0/24

    We have trunk-capable layer-2 switches. The question is what to use as the router on a stick. My choices seem to be:

    1) Use an existing Cisco ASA 5505 firewall. It appears the ASA can do the routing, but it's a 100Mbps device, so this seems sub-optimal at best
    2) Buy a router. This seems overkill
    3) Buy a layer-3 switch. Also seems overkill
    4) Use an existing Linux box as a router
    5) Use a new Linux box as a router
    6) Something I'm not thinking of

    I think either (4) or (5) is my best option, but I'm not sure how to choose between them; see the sketch below for what the Linux side would involve. I expect the amount of traffic that has to cross the VLANs to be somewhat small, but bursty. How much load does routing add to a CentOS machine?
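
    For options (4)/(5), the Linux side of a router on a stick is just 802.1Q subinterfaces plus IP forwarding. A minimal sketch - the interface name and VLAN IDs are assumptions, and the subnets follow the diagram:

        modprobe 8021q
        # one tagged subinterface per VLAN on the trunk port (on CentOS 5, 'vconfig add eth0 2'
        # does the same job as the 'ip link add' line below)
        ip link add link eth0 name eth0.2 type vlan id 2
        ip addr add 10.0.2.1/24 dev eth0.2
        ip link set eth0.2 up
        # ...repeat for the other VLANs, then enable routing between them
        sysctl -w net.ipv4.ip_forward=1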


  • Command line switching

    - by Larry
    I have read through some suggestions, but I am just not technical enough to get this, I think. I am a CAD designer, and each file has 5 files associated with it. I have 3 sets of 5 files, and each set needs to go into its own zip file, placed on a separate server. For example:

        "C:\Program Files\7-zip\7z.exe" a file1.zip "O:\server2\map files\BC\BC.d*"-0
        "C:\Program Files\7-zip\7z.exe" a file2.zip "O:\server2\map files\BC\ON.d*"-0
        "C:\Program Files\7-zip\7z.exe" a file3.zip "O:\server2\map files\BC\AB.d*"-0

    and I am in the directory "S:\server\map files\provinces" (for example). These lines run within an existing batch file, and by the time it reaches the 3 lines above, it's in the S: directory from the sample above. So it's looking on my PC for the 7-Zip program and creating the 3 zip file names, which it does; but it should place those zip files on the separate server, which it doesn't. Also, the first zip file includes all the other 10 files, the second does the same plus includes the first zip file, and the third does the same with the other two zip files - making me think the code isn't recognizing the part after file1.zip where I am trying to tell it which files to include and where to place the zip files.

    Ultimately, I want to either have the system create a new zip file if the old one was deleted, or copy the new files into the existing zip and overwrite any older files, and have these zip files placed in a separate location, which is where we share our files with other personnel from within our company. The S: drive is for all originals, and O: is for sharing. Is there a list of all the switch options with many different samples?
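
    For reference, 7z's syntax is "7z a <archive> <files>" - the archive name comes right after the a command, and everything after it is the file list, so the trailing "-0" doesn't look like a valid 7-Zip switch. A hedged sketch that writes the zip to O: while picking up the source files from S: (paths follow the post):

        "C:\Program Files\7-Zip\7z.exe" a -tzip "O:\server2\map files\BC\file1.zip" "S:\server\map files\provinces\BC.d*"
        :: 'a' creates the archive if it is missing and replaces older copies of matching
        :: files if it already exists - which covers both behaviours asked for above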


  • How to make Microsoft JVM work on Windows 7?

    - by rics
    I am struggling with the following problem: I cannot install MS JVM 3810 properly on Windows 7. When I start Internet Explorer 8, without having started any Java 1.1 programs, choosing Java custom settings under Internet Options causes the browser to crash.

    I have some Java 1.1 programs that work well in Internet Explorer 8 on Windows XP after the installation of MS JVM 3810. I know it is not advisable to use this old JVM, but it is not a short-term option to port the programs to newer Java, since they contain 3rd-party components. A complete rewrite is a long-term plan.

    Strangely, jview and the applet viewer (jview /a) work from a console, so MS JVM 3810 is not completely busted; IE 8 just does not like it. The problem with the applet viewer is that it cannot connect to the server, even though both signed and unsigned content in Java custom settings have been set to Enable all. (Since Java custom settings was unreachable due to the crash, the modifications - including My Computer - were performed through the registry and pre-checked to behave correctly on Windows XP and Internet Explorer 8.) If jview were working, I could at least think of a workaround. Is there a way to configure MS JVM or jview properly on Windows 7?

    Other options would be: checking the Internet Explorer 9 beta; using VirtualBox with Windows XP and an older IE in it; delaying the Windows 7 upgrade; ...

    Update: Finally we modified all the programs to work both as applets and as applications. This way the programs can still be used from the browser on older Windows versions, while on Windows 7 the applications are started from the desktop. Installation on all user machines is easy to solve, since they already have a large common application drive. The code update is fortunately only a few lines of modification: including a main method in the applet class. Furthermore, instead of the starting HTML page, a .bat file is used to set the classpath before startup with jview.
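
    A sketch of the .bat launcher described in the update (the shared-drive path and class name are made-up placeholders, and jview's /cp switch for setting the classpath is from memory - verify against jview /?):

        @echo off
        rem Q:\apps path and MainApplet name are hypothetical
        set CP=Q:\apps\common\lib\app.zip;.
        jview /cp %CP% MainApplet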

