Search Results

Search found 15798 results on 632 pages for 'authentication required'.

  • How can I get my routers to forward ports correctly?

    - by Giffyguy
    My network currently looks like this (simplified): Note that Router #2 is connected to the LAN interface of Router #1. This should be familiar to anyone who has seen a standard static-IP setup with an additional firewall for a residence or other small building. Router #1 is actually my cable gateway, but since it is a fully functional router/firewall, I am going to refer to it as a router.

    Now, I need to open various ports in both firewalls for incoming communication to my server - port 80 is a good example. So I've opened up port 80 in Router #2, and so far all incoming traffic at the public IP X.X.X.129 is being routed correctly. The problem is that I also need my server to respond to incoming traffic at the public IP X.X.X.130 on the WAN interface of Router #1. Naturally, I can't just tell Router #1 to forward port 80 to another public IP - port forwarding is only supported when the traffic is directed to the LAN subnet.

    I am willing to restructure my network topology if required, with the following conditions:

    - Router #1 cannot have its WAN IP reassigned - X.X.X.130 is mandatory.
    - Router #1 cannot be moved or disconnected from the cloud.
    - The server cannot be given a second IP address.
    - I would prefer the server to have a private IP address, e.g. 10.0.0.10.
    - I'd like to keep Router #2, but it can have a private IP, e.g. 10.0.1.10.

    Following these rules, I need to get my server to receive incoming traffic on port 80 from both public IP addresses. Does anyone on SU know if this is possible? So far my only theories have been to set up a static route on either router, or to somehow combine my two subnets into a single subnet.
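
    A double-forward is one way to satisfy those conditions: Router #1 redirects traffic arriving at X.X.X.130:80 to Router #2's WAN address, and Router #2's existing port-forward rule carries it the rest of the way to the server. As a hedged sketch - it only applies if Router #1 is Linux/iptables-based, which a cable gateway may well not be - the rule would look something like this (the addresses are the placeholders from the question):

        # On Router #1: rewrite the destination of web traffic aimed at the
        # second public IP so it lands on Router #2's WAN interface instead.
        iptables -t nat -A PREROUTING -d X.X.X.130 -p tcp --dport 80 \
            -j DNAT --to-destination X.X.X.129

    If the gateway only offers a web UI, the equivalent feature is sometimes exposed as "port redirection" or a virtual-server entry pointed at X.X.X.129.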

  • Is it possible to re-cab an Administrative Install Point?

    - by Nathaniel Bannister
    We have Acrobat 8 Pro at work, and our media was painfully out of date. Rather than install all of the machines at 8.0.0 and then do the 6 or 7 consecutive reboots Adobe expects you to be OK with, I decided I'd integrate the .msp files into the installer. After reading up on it, I figured out the exact patch order that Adobe required, extracted my CD to an Administrative Install Point, and ran the patches against it:

        msiexec /a AcroPro.msi /p AcrobatUpd810_efgj_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd811_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd812_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd813_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd816_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd817_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd820_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd822_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd823_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd825_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
        msiexec /a AcroPro.msi /p AcrobatUpd826_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"

    Now I have an AIP that is fully patched to 8.2.6 (tested working prior to attempting to cab it), but it is absolutely huge (1.2 GB). What I would like to do is take the folders within the AIP and put them back into a cab file for the sake of convenience in transferring the files around. I tried the command:

        cscript "C:\Program Files\Microsoft SDKs\Windows\v7.0\Samples\sysmgmt\msi\scripts\WiMakCab.vbs" AcroPro.msi Data1 /L /C /S

    per the guide I was using. While this did produce the cab file I wanted, the resulting MSI fails to install with error 2602. It's been a while since I've done something like this, and it's probably a glaring oversight on my part, but any insight would be much appreciated.
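
    Incidentally, the eleven patch commands can be collapsed into one line at a cmd prompt, since these update filenames happen to sort in the required patch order; this is just a convenience sketch, and start /wait is there so each administrative install finishes before the next begins:

        REM Apply every .msp in name order against the AIP (verify the sort
        REM order matches Adobe's patch order first; use %%P inside a .bat file).
        for %P in (AcrobatUpd*.msp) do start /wait msiexec /a AcroPro.msi /p "%P" TARGETDIR="C:\Acrobat8" /log "output.log"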

  • On Windows XP, Start > Run "My Documents" sometimes doesn't work

    - by Clayton Hughes
    On all of my home computers, I can enter "my documents" into the Start > Run prompt and the My Documents folder of the current profile will open up. What's more, I can continue typing subfolders, files, etc., and auto-complete works - it's smart and enjoyable. I can't check at the moment, but I'm almost positive entries like "My Pictures" and "My Music" also go to their correct folders.

    On my work computers, if I enter "my documents" into the Start > Run prompt, I get the following error:

        Windows cannot find 'my'. Make sure you typed the name correctly, and then try again. To search for a file, click the Start Button, and then click Search.

    I can sort of circumvent this by creating a shortcut in my PATH named 'my' that points to the My Documents folder, but this doesn't solve the auto-complete option (and it's otherwise imperfect, of course, because "my pictures" and "my music" would all direct to the same place). A Google search doesn't provide much help on this, although it does identify a poster in 2007 with this same question at another board: http://www.msfn.org/board/lofiversion/index.php/t124813.html (login required, but a Google cache is available here: http://preview.tinyurl.com/ygxhwwl).

    Is this just a limitation of networks belonging to a domain, or is there some way I can get this functionality back? My Documents folder does live in the standard place (C:\Documents and Settings\{username}\My Documents), and not on a network drive or anything. It's probably worth adding that the computers are part of some freakish Novell domain thing, too. I'm not in IT here so I'm not too up on the details. Thanks for any help/suggestions!
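
    Two workarounds that don't depend on the "my documents" alias resolving - both assume a stock XP profile, and either line is typed straight into Start > Run:

        %USERPROFILE%\My Documents
        shell:Personal

    shell:Personal is the canonical-name form of the same folder; "shell:My Pictures" and "shell:My Music" may address the sibling folders, though under a redirected/Novell-managed profile they might not resolve.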

  • Setting up CIFS ISO Repository for Xen

    - by user85610
    I recently started working with Xen, to try to make better use of an extra desktop box for development testing. I'd like to be able to do OS installs on it without having to burn discs, but I'm having some trouble actually getting it to boot OS ISOs from a Windows share. My Windows box is running Win 7, and it's on a domain. I created a CIFS ISO SR in Xen, specifying the correct username and password to use. Xen is able to scan the share, I see the ISOs that are in the folder, and I can select them in the list in XenCenter. However, when I try to start the VM, I get:

        Error: Starting VM 'linxcentos' - INVALID_SOURCE - Unable to access a required file in the specified repository: file:///tmp/cdrom-repo-hIz-H7/isolinux/vmlinuz.

    I tried booting a different Linux ISO and got the same result. I know that the ISOs are valid because I was able to install them without issue when I tried VMware ESXi earlier. What am I missing here? It's Xen/XenCenter 6 and I'm trying to install the newest version of CentOS. I may end up burning it for now, but I'd like to get this to work, at least just for the principle of not letting mysterious behaviors go unsolved...
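
    One way to rule out a XenCenter quirk is to recreate the ISO library from the CLI on the host and watch for errors as it attaches; a sketch, with the share path and credentials as placeholders:

        # Create a CIFS ISO SR from the XenServer console (values are examples).
        xe sr-create name-label="Windows ISO library" type=iso content-type=iso \
            device-config:location=//winbox/isos \
            device-config:username=DOMAIN\\myuser \
            device-config:cifspassword=mypassword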

  • Managing a test iSCSI target server

    - by dyasny
    Hi all, I am using a RHEL server with a few hard drives, and tgtd as the iSCSI target software. I am looking for a way to allocate and deallocate space, and targets with that space, without restarting my system or harming other LUNs. Currently, all my HDDs are PVs in a single VG, and I lvcreate/lvremove as required, and then export the allocated LVs using a tgt script:

        /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=1 --targetname iqn.2001-04.com.lab.gss:300gb
        /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_300Gb
        /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
        /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=2 --targetname iqn.2001-04.com.lab.gss:200gb
        /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_200Gb
        /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 2 -I ALL
        /usr/sbin/tgtadm --lld iscsi --op new --mode target --tid=3 --targetname iqn.2001-04.com.lab.gss:100gb
        /usr/sbin/tgtadm --lld iscsi --op new --mode logicalunit --tid 3 --lun 1 -b /dev/mapper/iscsi_vg-iscsi_100Gb
        /usr/sbin/tgtadm --lld iscsi --op bind --mode target --tid 3 -I ALL
        tgtadm --mode target --op show

    So in order to remove a LUN, I stop the tgtd service, lvremove the LV, and remove the entry from the iSCSI target script. When I add a LUN, I run lvcreate, and then add an entry to the script and run it. This is not quite optimal, since restarting the service is a bad idea while other LUNs are busy, so I am looking for a more scalable and safer way. Thanks
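
    tgtadm has delete operations that mirror the new operations above, so a single target and its backing LV can be torn down live without touching tgtd or the other tids; a sketch using tid 2 from the script (it assumes no initiator is still logged in to that target):

        # Tear down one target live; tid/lun numbers match the script above.
        tgtadm --lld iscsi --op delete --mode logicalunit --tid 2 --lun 1
        tgtadm --lld iscsi --op delete --mode target --tid 2
        lvremove -f /dev/iscsi_vg/iscsi_200Gb

    Adding a LUN is the reverse: lvcreate, then the same three new/bind commands from the script for a fresh tid, with no service restart in either direction.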

  • Update from Debian Lenny to Squeeze

    - by Daniel
    I'm trying to update from Debian Lenny to Squeeze on my 64-bit root server, and did the following so far:

    1. modified sources.list
    2. apt-get update
    3. apt-get upgrade
    4. apt-get install linux-image-2.6-amd64

    The last step leads to the following error output:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          linux-image-2.6-amd64: Depends: linux-image-2.6.32-5-amd64 but it is not going to be installed
        E: Broken packages

    UPDATE: here's my sources.list:

        deb ftp://mirror.hetzner.de/debian/packages squeeze main contrib non-free
        deb ftp://mirror.hetzner.de/debian/security squeeze/updates main contrib non-free
        deb http://ftp.de.debian.org/debian squeeze main non-free contrib
        deb-src http://ftp.de.debian.org/debian squeeze main non-free contrib
        deb http://security.debian.org/ squeeze/updates main contrib non-free
        deb-src http://security.debian.org/ squeeze/updates main contrib non-free

    How can I fix that safely? thx
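
    The Squeeze release notes' upgrade path uses dist-upgrade for exactly this case: plain apt-get upgrade refuses to install packages that pull in new dependencies, which is why the 2.6.32 kernel image is "not going to be installed". The usual sequence, sketched:

        apt-get update
        apt-get upgrade        # first the safe, in-place upgrades
        apt-get dist-upgrade   # then the ones that add/remove packages, e.g. the new kernel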

  • How to eliminate the downtime when a dynamic IP address changes?

    - by xenon
    We currently have a number of client computers linked up to a database server (MS SQL 2008) for replication. The database server recognises the computers based on their Windows hostnames. We are using dynamic IP addresses at this time because we tend to change the computers' hardware quite frequently, and so the MAC address may be different. Unless static IP has a good way for us to manage frequently changing MAC addresses, we are keeping dynamic IP.

    The problem with dynamic IP addresses, however, is that when a client fetches a new IP from the DHCP server - i.e., there is a change in the IP address - there is going to be downtime while the hostname record is updated to the new IP address, the client's DNS cache of the hostname reloads, and the server's DNS cache reloads to see the new IP behind the hostname. All of these have different timings, and the delay can be really bad at times. Restarting the computer doesn't always work either. The clients are on Windows 7.

    How can I eliminate the downtime required when there is a change in IP in the case of dynamic IP addresses?
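
    The window can usually be shrunk by forcing DNS registration instead of waiting for the periodic refresh; a sketch of the client-side commands on Windows 7 (run in an elevated prompt), with the flush run on whichever machine holds the stale entry:

        REM Re-register this host's A record with the DNS server immediately:
        ipconfig /registerdns
        REM Drop the locally cached (possibly stale) lookup:
        ipconfig /flushdns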

  • Is there a small business router that shows bandwidth usage graphs in the admin panel?

    - by Robert Drake
    I support a large number of public libraries that are having their networks upgraded in response to a grant application. These libraries are generally home to between 6 and 15 computers and have little or no tech services, either onsite or contracted remotely. In order to justify current and future purchases, a number of the libraries have requested routers that can provide bandwidth usage graphs that they can show to their managing boards.

    Is there a small business router that displays traffic graphs in the router administration web interface? The router needs to support DHCP and basic firewalling; no other features are required. Further, the reports just need to show overall trends. It is not necessary to show traffic by IP, by protocol/application, or by time of day. They just need an overall week-to-week, month-to-month trend line.

    I'm familiar with MRTG/PRTG and other tools that collect SNMP data from the router, but the libraries don't have the expertise for the configuration. I've considered installing the Tomato firmware on some cheap home/home office routers, but if there's a commercial product that can be purchased, that would be significantly simpler. Also, the library boards would be much more likely to approve the purchase of a commercial product over a 'hacked' one. Any assistance would be appreciated.

  • Mac WLAN 802.11b+g WPA1 connection issues

    - by Peto
    Hi, I have a Telewell TW-EA510v4 ADSL modem + WLAN router configured as follows:

        Mode: 802.11b+g
        Security Mode: WPA1 Pre-shared Key
        WPA Algorithms: TKIP

    Connections from only certain MAC addresses have been allowed, and the MAC address of my Mac is in that list. The WLAN works just fine with an iPhone and an old Acer laptop. It also worked for about two months or so with my MacBook Pro (a year-and-a-half-old model or so). Occasionally I've had minor problems with it, which have required either a reboot of the ADSL modem or a reboot of my Mac. However, for the last week or so I haven't been able to connect to it at all. This is what I get in the console when I try to connect:

        5.5.2010 20.54.53 airportd[73731] Apple80211Associate() failed -3924 (Invalid PMK)
        5.5.2010 20.54.53 Apple80211 framework[584] airportd MIG failed (Associate Event) = -3924 (Invalid PMK) (port = 104599)
        5.5.2010 20.54.53 SystemUIServer[584] Error joining WLAN-M: Invalid password (-3924 Invalid master key)

    The pre-shared key I use is not incorrect - I'm 100% sure of that. The error log from the router only says this when I try to connect to it:

        May 05 21:09:54 home.gateway:i802_1x:none: <my mac address> associated
        May 05 21:10:00 home.gateway:i802_1x:none: <my mac address> disassociated
        May 05 21:10:01 home.gateway:i802_1x:none: <my mac address> disassociated

    Any ideas or tips to troubleshoot this further?
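
    "Invalid PMK" with a known-good passphrase often points at a corrupted stored network entry rather than the key itself, so one low-risk step is to forget the remembered network and its keychain item, then rejoin from scratch. A sketch (en1 is the usual AirPort interface on MacBook Pros of that era; adjust if yours differs):

        # Forget the stored wireless network, then rejoin and re-enter the key.
        networksetup -removepreferredwirelessnetwork en1 "WLAN-M"

    Deleting the "WLAN-M" AirPort network password entry in Keychain Access before rejoining covers the other half of the stored state.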

  • What could cause a huge packet loss in Ubuntu 9.10, for both wired and wireless?

    - by xzenox
    I was previously using 9.04 fine (and in fact, I am posting this from my old 9.04 live CD). I tested the following install steps in a VirtualBox VM prior to following the same ones to upgrade my laptop:

    1. Download/burn the Ubuntu minimal CD (the 12 MB one)
    2. Install Ubuntu minimal
    3. sudo apt-get update
    4. sudo apt-get upgrade
    5. sudo apt-get install ubuntu-desktop ubuntu-standard

    In the VM this worked fine, and I found myself with a working 9.10 Ubuntu; the network worked fine and I was able to test my backups and DropBox without a hitch (the host was 9.04). When I followed the same steps on my laptop, everything worked up to the point of 9.10 being installed and running. As far as I can tell, everything besides eth0/wireless works. For some reason, I am unable to access the internet. Ping reports that over 99% of packets get lost (over an hour or so of pinging). This means, for example, that if I try hard enough I can load a webpage, but only at the cost of much patience...

    This happens both for a wired and a wireless connection to my WRT310N (updated with the latest firmware). At first I thought it could be related to the IPv6 issues people have been experiencing, but even after disabling IPv6 at the kernel level (through GRUB), I still get the issue. I do not think this is related to DNS issues or the like, since even when I ping my ISP's gateway IP I get the same amount of packet loss - no DNS resolving should be required there. Access to my router works peachy, with no packet loss there. I've tried different MTU values, but to no avail. Note that this issue affects every web-enabled application: Firefox, ping, Synaptic, etc.

    The same hardware/router combo works with 9.04 but not with 9.10. In fact, when I ran sudo apt-get install ubuntu-desktop ubuntu-standard after 9.10 minimal was installed, it downloaded over 400 MB of packages without a hitch, so my guess is that one of the packages in either ubuntu-desktop or ubuntu-standard is causing havoc. Thoughts?
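
    Since the loss only appears beyond the router, a quick way to separate a fragmentation/MTU problem from a driver problem is to probe with do-not-fragment pings at full frame size; a sketch (the gateway address is a placeholder for the ISP gateway IP):

        # 1472 data bytes + 28 header bytes = a full 1500-byte frame; if this
        # fails while smaller sizes pass, fragmentation/MTU is implicated.
        ping -c 10 -M do -s 1472 203.0.113.1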

  • How Do I Secure WordPress Blogs Against Elemento_pcx Exploit?

    - by Volomike
    I have a client who hosts several WordPress 2.9.2 blogs. They are somehow getting defaced via the Elemento_pcx exploit. It drops these files in the root folder of the blog:

        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 default.htm
        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 default.php
        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 index.asp
        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 index.aspx
        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 index.htm
        -rw-r--r--  1 userx userx 1459 Apr 16 04:25 index.html
        -rwxr-xr-x  1 userx userx 1459 Apr 16 04:25 index.php*

    It overwrites index.php. A keyword inside each file is "Elemento_pcx". It shows a white fist on a black background with the phrase "HACKED" in bold letters above it. We cannot determine how it gets in to do what it does. The wp-admin password isn't hard, but it's also not very easy either. I'll change it up a little to show you what the password sort of looks like: wviking10. Do you think it's using an engine to crack the password? If so, how come our server logs aren't flooded with wp-admin requests as it runs down a random password list? The wp-content folder has no changes inside it, but is run as chmod 777 because wp-cache required it. Also, the wp-content/cache folder is run as chmod 777 too.
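
    Whatever the entry point turns out to be, the world-writable directories are the most likely foothold, since any process on the server can drop files there. A partial hardening sketch - it assumes the web server runs as a dedicated user (here called apache, a placeholder) and that wp-cache only needs that user, not everyone, to have write access:

        # Replace 777 with ownership-based write access for the web server user.
        chown -R userx:apache wp-content
        find wp-content -type d -print0 | xargs -0 chmod 775
        find wp-content -type f -print0 | xargs -0 chmod 664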

  • SQL Server 2005 SP3 on Windows 7 - No Management Studio

    - by Mike Thomas
    I've been trying for a day and a half now to get SQL Server 2005, Dev edition, to work on Windows 7, 64-bit Professional. I install from the disk, then run SP3. I get a failure on the Client Components section of the Installation Progress, along with this vague message:

        Product                    : Client Components
        Product Version (Previous) : 1399
        Product Version (Final)    :
        Status                     : Failure
        Log File                   : C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\LOG\Hotfix\SQLTools9_Hotfix_KB955706_sqlrun_tools.msp.log
        Error Number               : 1712
        Error Description          : MSP Error: 1712  One or more of the files required to restore your computer to its previous state could not be found. Restoration will not be possible.

    I've uninstalled all of Visual Studio and tried to make this as clean as possible, and have read a lot of the blog posts, but am really at my wits' end about this. I am not a DBA, but I use SQL Server all the time when coding and testing apps. Does anyone have any ideas as to where I can get this sorted out? I've been at this for a long time and have never encountered an installation as bad as this one. Thanks Mike Thomas

  • Adding 2008 Server to 2008 Domain

    - by Phillip
    Hello, I'm trying to create a lab for testing before I deploy solutions. I'm not an experienced IT administrator, and therefore I come here for help. I'm running two virtual servers on the same machine, on a local connection between the two. Their names are TSDATA1 and TSDATA2, where TSDATA1 is the domain controller. I am able to ping between them, with both "ping TSDATA1" and "ping 10.0.0.1", which is the IP address of TSDATA1. The IP address of TSDATA2 is 10.0.0.2. I'm trying to join the domain with TSDATA2, but I'm getting this error when trying:

        Note: This information is intended for a network administrator. If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt.

        The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller for domain tsdata.local:

        The error was: "DNS name does not exist." (error code 0x0000232B RCODE_NAME_ERROR)

        The query was for the SRV record for _ldap._tcp.dc._msdcs.tsdata.local

        Common causes of this error include the following:

        - The DNS SRV records required to locate a AD DC for the domain are not registered in DNS. These records are registered with a DNS server automatically when a AD DC is added to a domain. They are updated by the AD DC at set intervals. This computer is configured to use DNS servers with the following IP addresses: 10.0.0.1

        - One or more of the following zones do not include delegation to its child zone: tsdata.local, local, . (the root zone)

        For information about correcting this problem, click Help.

    I've figured out it has something to do with DNS lookup, but I have no clue what to do. Can anyone help?
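
    The first things to verify are that TSDATA2 uses 10.0.0.1 as its only DNS server and that the DC has actually registered its SRV records; a sketch of both checks from a command prompt on TSDATA2 (the connection name is an example):

        REM Point the NIC at the DC's DNS service:
        netsh interface ip set dns name="Local Area Connection" static 10.0.0.1
        REM Ask that DNS server directly for the record the domain join needs:
        nslookup -type=SRV _ldap._tcp.dc._msdcs.tsdata.local 10.0.0.1

    If the nslookup fails, restarting the Netlogon service on TSDATA1 is a common way to force the SRV records to re-register, assuming the DNS role on the DC hosts the tsdata.local zone.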

  • Explorer.EXE ordinal 423 not found in urlmon.dll after updates/IE8 install

    - by Zoot
    Setting up a brand new Dell Optiplex 980 with Windows XP SP3, and everything started up fine on the first boot. My first task was to install system updates, including IE8 and WGA. After the required reboot following the updates, I now get this error message:

        Explorer.EXE - Ordinal not found.
        The ordinal 423 could not be located in the dynamic link library urlmon.dll

    Per my cursory Google search, this forum thread places the blame squarely on IE8. The solution provided is to enter safe mode and remove IE8. Unfortunately, when I press F8 to choose to boot into safe mode, I only have the option of "Windows XP SP3 Professional" and no safe-mode options. Any other ideas? Thanks in advance.

    FYI, I can get to the Windows Task Manager by holding down Ctrl-Alt-Delete, but programs don't seem to run properly if you select them. I tried chatting with Dell Support, and we tried to initiate System Restore at c:\windows\system32\restore\rstrui.exe, but that had a similar "ordinal 423 not found in urlmon.dll" error.
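
    A common first-aid step for ordinal errors in urlmon.dll is re-registering the DLL, which can be attempted from Task Manager even with Explorer broken; a hedged sketch (it assumes regsvr32 itself still launches):

        REM In Task Manager: File > New Task (Run...), then enter:
        regsvr32 urlmon.dll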

  • Compiling Gearman PHP Library for CentOS 5.8

    - by Andrew Ellis
    I've been trying to get Gearman compiled on CentOS 5.8 all afternoon. Unfortunately, I am restricted to this version of CentOS by my CTO and how he has our entire network configured; I think it's simply because we don't have enough resources to upgrade our network... But anyway, the problem at hand. I have searched through Server Fault, Stack Overflow, and Google, and am unable to locate a working solution. What I have below is stuff I have pieced together from my searching.

    Searches have said to install the following via yum:

        yum -y install --enablerepo=remi boost141-devel libgearman-devel e2fsprogs-devel e2fsprogs gcc44 gcc-c++

    To get the Boost headers working correctly I did this:

        cp -f /usr/lib/boost141/* /usr/lib/
        cp -f /usr/lib64/boost141/* /usr/lib64/
        rm -f /usr/include/boost
        ln -s /usr/include/boost141/boost /usr/include/boost

    With all of the dependencies installed and paths set up, I then download and compile gearmand-1.1.2 just fine:

        wget -O /tmp/gearmand-1.1.2.tar.gz https://launchpad.net/gearmand/1.2/1.1.2/+download/gearmand-1.1.2.tar.gz
        cd /tmp && tar zxvf gearmand-1.1.2.tar.gz
        cd gearmand-1.1.2 && ./configure && make -j8 && make install

    That works correctly. So now I need to install the Gearman library for PHP. I have attempted this through PECL and by downloading the source directly; both result in the same error:

        checking whether to enable gearman support... yes, shared not found
        configure: error: Please install libgearman

    What I don't understand is that I installed the libgearman-devel package, which also installed the core libgearman. The installation installs libgearman-devel-0.14-3.el5.x86_64, libgearman-devel-0.14-3.el5.i386, libgearman-0.14-3.el5.x86_64, and libgearman-0.14-3.el5.i386. Is it possible the package version is lower than what is required? I'm still poking around with this, but figured I'd throw this up to see if anyone has a solution while I continue to research a fix. Thanks!
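
    Version is the likely culprit: recent PECL gearman releases generally need a much newer libgearman than the 0.14 shipped for EL5, and the gearmand-1.1.2 build above already installed a current libgearman under /usr/local. A sketch of building the extension against that copy instead (the --with-gearman prefix is the assumption here):

        # Make the freshly installed libgearman visible to the linker.
        echo /usr/local/lib > /etc/ld.so.conf.d/libgearman.conf && ldconfig

        # Build the PHP extension against /usr/local instead of the EL5 package.
        pecl download gearman
        tar zxvf gearman-*.tgz && cd gearman-*/
        phpize
        ./configure --with-gearman=/usr/local
        make && make install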

  • /usr/bin/python (Python 2.4) was deleted on CentOS 5. I compiled from source but yum is still broken. How can I get everything back to the way it was?

    - by Maxwell
    I saw a lot of other questions like this, but none of them answered the exact part I am having trouble with (actually installing the Python RPM). Someone on my system deleted /usr/bin/python and /usr/bin/python2.4 on my 64-bit CentOS 5.8 installation. I recompiled Python 2.4 from source, but now whenever I try to yum install anything I get the following error:

        [root@cerulean-OW1 ~]# yum install httpd
        There was a problem importing one of the Python modules
        required to run yum. The error leading to this problem was:

            No module named yum

        Please install a package which provides this module, or
        verify that the module is installed correctly.

        It's possible that the above module doesn't match the
        current version of Python, which is:
        2.4 (#1, Dec 16 2012, 09:16:56)
        [GCC 4.1.2 20080704 (Red Hat 4.1.2-52)]

        If you cannot solve this problem yourself, please go to
        the yum faq at:
          http://wiki.linux.duke.edu/YumFaq

    I checked http://wiki.linux.duke.edu/YumFaq and it said the following:

        If you are getting a message that yum itself is the missing module then you probably installed it incorrectly (or installed the source rpm using make/make install). If possible, find a prebuilt rpm that will work for your system, like one from Fedora or CentOS. Or, you can download the srpm and do a rpmbuild --rebuild yum*.src.rpm

    I tried going to http://rpm.pbone.net/index.php3/stat/4/idpl/17838875/dir/centos_5/com/python-2.4.3-46.el5.x86_64.rpm.html to install Python, which resulted in the following error:

        [root@cerulean-OW1 ~]# rpm -Uvh python-2.4.3-46.el5.x86_64.rpm
        error: Failed dependencies:
            python-libs-x86_64 = 2.4.3-46.el5 is needed by python-2.4.3-46.el5.x86_64

    So I tried installing python-libs-x86_64, which resulted in the following:

        [root@cerulean-OW1 ~]# rpm -Uvh python-libs-2.4.3-46.el5_8.2.x86_64.rpm
        warning: python-libs-2.4.3-46.el5_8.2.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 192a7d7d
        Preparing...                ########################################### [100%]
            package python-libs-2.4.3-46.el5_8.2.x86_64 is already installed
        file /usr/lib64/libpython2.4.so.1.0 from install of python-libs-2.4.3-46.el5_8.2.x86_64 conflicts with file from package python-libs-2.4.3-46.el5_8.2.x86_64
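
    Note the release mismatch: the python RPM is 2.4.3-46.el5 while the installed python-libs is 2.4.3-46.el5_8.2, and the dependency demands an exactly matching release. A sketch of the usual way out - download python and python-libs from the same el5_8.2 release and hand both to rpm in one transaction so it can satisfy the dependency itself:

        # Both packages in one transaction; --replacepkgs tolerates the
        # already-installed python-libs of the same version.
        rpm -Uvh --replacepkgs \
            python-2.4.3-46.el5_8.2.x86_64.rpm \
            python-libs-2.4.3-46.el5_8.2.x86_64.rpm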

  • SkyDrive for desktop not showing icon in notification area

    - by jeruntime
    Is this normal behavior? I'm concerned that some files will not be transferred. For example, I need to be sure that it's safe to put my laptop to sleep, so that later, when I'm on my desktop, I won't have to open up my laptop and sync again because I closed it too early. The icon doesn't even appear when I THINK it is syncing. Thanks for any help in advance.

    Edit 1: Yeah, I know the app isn't required or even offered anymore, but what I was wondering was whether there is some way to tell if it's syncing or not. The old desktop app had an icon in the notification area with an animation for when it was syncing, but now there isn't anything there. When I try to make it appear, it says the application isn't active and the icon will show the next time it is. Its version is 17.0.2015.0811. Is it possible to force the icon to show in the notification area if it isn't designed to show up there at all? I basically just want the same functionality as SkyDrive on Windows 8.

  • How do you handle data archiving?

    - by 20th Century Boy
    Backups are one thing, but long-term archival is another. For example, you might be required to store emails for 7 years, or keep all project data indefinitely. I used to save archives to tape, but then I've had tapes get destroyed (drives ripping the tape out). So... write to 2 tapes, I hear you say. Is that what others do? Have 2 (or more) tapes of the same data for redundancy?

    But then the other issue is that tapes cannot usually be read by different backup software vendors. E.g., if you go from ARCserve to Backup Exec to CommVault over 10 years, you would need to keep all 3 systems so that you could restore old data. Likewise for hardware: old tapes might not be barcoded, might not be compatible with the new library, etc. So do you keep old tape hardware AND old software just in case you might need to restore a 10-year-old file? Or... when you move to a new backup system, do you migrate all archived data to the new system and re-archive it onto new tapes? That could be a huge job. Any thoughts?

  • Kubuntu: apt-get install of php5-dev: libtool version mismatch?

    - by pinkgothic
    (Warning, clueless-newbism ahead.)

    Background info: I'm actually trying to install/upgrade xdebug. sudo pecl install xdebug yields:

        downloading xdebug-2.0.5.tgz ...
        Starting to download xdebug-2.0.5.tgz (289,234 bytes)
        ............................................................done: 289,234 bytes
        67 source files, building
        running: phpize
        sh: phpize: not found
        ERROR: `phpize' failed

    A quick Google tells me that phpize is part of a package called php5-dev, so off I ran to install that. My problem is that sudo apt-get install php5-dev fails with this output:

        sudo apt-get install php5-dev
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          php5-dev: Conflicts: libtool (>= 2.2) but 2.2.6a-4 is to be installed
        E: Broken packages

    2.2.6a-4 is greater than 2.2, so I'm not sure why it's hanging itself up at that point. I'm guessing the fact that it's not entirely numeric is throwing apt-get off? I can probably install xdebug manually (though I've never done this before, so picture me with a clueless-newb deer-in-headlights look here, violently shaking my head and begging for a simpler solution) rather than via pecl/aptitude, but is there a way I can make aptitude install php5-dev despite the bogus 'broken package' claim? Is it even bogus, or am I misreading the error message? Alternatively: could I install phpize in some other way (e.g. via pear or pecl)?
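
    Worth noting when reading that message: the line is a Conflicts:, not a Depends:, so php5-dev is refusing to coexist with libtool >= 2.2 - the version comparison is working, and 2.2.6a-4 satisfying ">= 2.2" is precisely the problem. A sketch of how to confirm and work around it (package versions on your mirror may differ):

        # See which libtool versions apt knows about and what depends on it.
        apt-cache policy libtool
        apt-cache rdepends libtool

        # If nothing essential needs libtool 2.2, let apt remove it and retry.
        sudo apt-get remove libtool
        sudo apt-get install php5-dev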

  • Checkpoint VPN-1 R60 and Windows 7 64 Bit Client

    - by Mohit
    To the best of my knowledge, my company is using a Check Point VPN-1 R60 firewall/VPN server (I guess, as I don't know how to check the server version). Now the problem is that I installed Windows 7 64-bit, but after my research I found that there is not a single client (SecuRemote/SecureClient) for Win7 64-bit when the firewall or server is R60. I've thought about some open-source solutions. Can you guys please suggest some, with the configuration required? As of now, I know the IP of the server, and I know the username and password I connect with - and that is not my domain password, I can confirm that to you guys. I am not a network guy; I am more of a developer. But I need some help on this, so let me know if I can provide you more details. Please, I need urgent help on this.

  • Password Manager that allows syncing across platforms

    - by lexu
    I use OS X, Linux, Solaris and Windows, for work and from home. There are good tools that allow me to manage the many logins/passwords required, platform-independently. But mostly they expect me to carry a thumb drive around, or they require direct access to a central location (a sky drive in the cloud). The thumb drive is too easily lost (= synchronized backup needed), and the central location is not always reachable/mountable; besides, company policy rightly often prevents this. Is there a tool that allows me to add passwords locally and then sync its DB with the "mother ship" later? Or is there another approach that you use that solves my problem?

    EDIT: My question is more about "synchronize" than cross-platform. I've evaluated (= read the feature list of) some good cross-platform tools, but need one that does the synchronizing for me. By synchronize I mean "merge two versions", not "replace (hopefully) old file with new". I'm not sure I'm always disciplined/awake enough to prevent data loss.

    UPDATE: Lifehacker just posted that AgileSolutions now have a beta version of 1Password for Windows.

  • How do I configure nVidia drivers on a Portable Ubuntu setup?

    - by Nicholas Flynt
    I've been pulling my hair out over this one for a couple of days now; Google is no help. I've created a wonderful (until this issue) portable copy of Ubuntu Linux that will boot on mostly anything, by using a USB enclosure for my laptop's 80 GB SATA drive. So far so good: it boots and runs on everything, and on non-nVidia card setups it was even detecting the drivers, or letting me install the required drivers for hardware acceleration and Compiz. Because, you know, the wobbly windows are the most awesome thing ever.

    Anyway, my desktop machine had an nVidia card, so I'm thinking, sure, I'll just install the nVidia drivers like before and everything will work happily. Not so - now the desktop and any other nVidia cards work great, but it seems to have completely disabled any other graphics cards. When the kernel module detects that an nVidia card isn't present, it shoots up this nasty little dialog box giving me the option to boot into "low graphics" mode, which doesn't even allow me to use the correct screen resolution, much less see the installed graphics card and try to configure a driver for it.

    Is there any way to configure Ubuntu (with the dreaded nVidia kernel module) so that it can use nVidia's drivers when an nVidia card is present, and default to the normal (not low-graphics) setup in other cases, so that it has a fair chance of using what's actually present? I'm not afraid to muck with config files, I just don't know the underlying system well enough to feel comfortable diving in without a push in the right direction. Thanks guys!
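
    One brute-force approach is to keep two xorg.conf variants and pick one early at boot based on what lspci reports; a sketch (the file names and the boot-time hook are assumptions, not an existing Ubuntu mechanism):

        #!/bin/sh
        # Hypothetical boot-time hook: choose an X config for the detected GPU
        # before the display manager starts.
        if lspci | grep -qi 'VGA.*nVidia'; then
            cp /etc/X11/xorg.conf.nvidia  /etc/X11/xorg.conf
        else
            cp /etc/X11/xorg.conf.generic /etc/X11/xorg.conf
        fi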

  • Automated software installation for MS Windows?

    - by Duncan Bayne
    I am currently setting up a Windows development environment (the whole Visual Studio 2010 stack plus plugins, on top of Windows 7). This has got me wondering whether there's a Windows equivalent to what I do for dev environment setup in Ubuntu. It takes literally hours to get a dev environment set up in Windows, involving a lot of manual intervention. On Ubuntu, I have two shell scripts: one I run as root, which configures the system using apt-get (amongst other things), and one I run as me, which configures my user account. Those scripts live in my private Subversion repository. To set up a dev environment from scratch requires five commands:

        sudo apt-get install -y subversion
        svn co http://svn.XXXX.XXX/personal/
        cd personal
        sudo ./ubuntu_setup_root.sh
        ./ubuntu_setup_user.sh

    The only human intervention required is to pick a root password for MySQL. So it takes only a few minutes of human attention to go from a vanilla Ubuntu installation to a full development environment with the latest builds of everything, perfectly tailored down to shortcut keys and wallpaper. Is there an equivalent process for Windows? In an ideal world it'd be something trivially scriptable using C# Script or PowerShell, which could live in source control and make use of a repository of ISOs downloaded from MSDN...
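
    For the package-manager half of this, Chocolatey is the closest Windows analogue to apt-get; a sketch of a bootstrap script (the tool list is illustrative - large installs like Visual Studio itself typically still come from the MSDN ISOs):

        REM Bootstrap Chocolatey, then pull a toolchain unattended.
        @powershell -NoProfile -ExecutionPolicy Bypass -Command ^
            "iex ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"
        choco install -y git 7zip notepadplusplus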

  • Development on Windows 7; Web server on Linux - How to share Apache web root?

    - by TheKeys
    I've got a LAMP server that I want to use as a local web server, and a Windows 7 machine that I want to use as my development machine. The machines will be on the same LAN (or the Windows box will be VPNed into the LAN). My question is: what is the best way of sharing the web root of the LAMP server so that I can edit the files on the remote Windows 7 machine, and how do I go about configuring this on the Linux machine? (Fedora 16)

    I would like the solution to be as easy to use as possible, with preferably no extra steps required to save/edit/upload files from my IDE on my Windows 7 machine. I'm thinking either a Samba or an NFS share is the way to go, but I'm concerned I'm going to run into issues with permissions and Unix/Windows file handling. Is one better than the other for my use case, or is there a better alternative solution? I'm currently using Windows 7 Professional, which doesn't have NFS support, but I would upgrade to Ultimate (which does have NFS support) if it's the best solution.
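
    For the Samba route, a minimal share definition along these lines is usually enough; a sketch for /etc/samba/smb.conf on the Fedora box (the user name and path are placeholders, and force user keeps everything saved from Windows owned by the account Apache reads as):

        [webroot]
            path = /var/www/html
            valid users = devuser
            read only = no
            force user = apache
            create mask = 0644
            directory mask = 0755

    Then add the Samba account with smbpasswd -a devuser and map the share as a drive letter from Windows. On Fedora, SELinux may also need to be told to let Samba write into the web root (e.g. via the samba_export_all_rw boolean) before saves from the IDE succeed.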
