Search Results

Search found 9279 results on 372 pages for 'job queue'.


  • Scripted redirection for Outlook 2003

    - by John Gardeniers
    We have a staff member in sales who has gone onto a 4-day week (getting ready for retirement), so each Thursday afternoon her email needs to be forwarded to another user and each Friday afternoon it needs to be set back. I'm using the VBS script below to do this, run via the Task Scheduler. Although the script appears to do its job, based on what I see when I view the user's Exchange settings, Exchange doesn't always recognise that the setting has changed. E.g. last Thursday the forwarding was enabled and worked correctly. On Friday the script did its thing to clear the forwarding, but Exchange continued to forward messages all weekend. I found that I can force Exchange to honour the changed setting by merely opening and closing the user's properties in ADUC. Of course I don't want to have to do that. Is there a non-manual way I can have Exchange read and honour the setting? The script (VBS):

        ' Call this script with the following parameters:
        '
        ' SrcUser - The logon ID of the user whose account is to be modified
        ' DstUser - The logon account of the person to whom mail is to be forwarded
        '           Use "reset" to clear the email forwarding
        SrcUser = WScript.Arguments.Item(0)
        DstUser = WScript.Arguments.Item(1)
        SourceUser = SearchDistinguishedName(SrcUser) 'The user login name
        Set objUser = GetObject("LDAP://" & SourceUser)
        If DstUser = "reset" Then
            objUser.PutEx 1, "altRecipient", ""
        Else
            ForwardTo = SearchDistinguishedName(DstUser) ' The contact common name
            objUser.Put "AltRecipient", ForwardTo
        End If
        objUser.SetInfo

        Public Function SearchDistinguishedName(ByVal vSAN)
            Dim oRootDSE, oConnection, oCommand, oRecordSet
            Set oRootDSE = GetObject("LDAP://rootDSE")
            Set oConnection = CreateObject("ADODB.Connection")
            oConnection.Open "Provider=ADsDSOObject;"
            Set oCommand = CreateObject("ADODB.Command")
            oCommand.ActiveConnection = oConnection
            oCommand.CommandText = "<LDAP://" & oRootDSE.get("defaultNamingContext") & ">;(&(objectCategory=User)(samAccountName=" & vSAN & "));distinguishedName;subtree"
            Set oRecordSet = oCommand.Execute
            On Error Resume Next
            SearchDistinguishedName = oRecordSet.Fields("DistinguishedName")
            On Error GoTo 0
            oConnection.Close
            Set oRecordSet = Nothing
            Set oCommand = Nothing
            Set oConnection = Nothing
            Set oRootDSE = Nothing
        End Function

    Read the article

  • Multi-partition USB stick

    - by nightcracker
    In my freelance job as "the dude that fixes your computer" I have an extremely handy tool, a bootable USB stick with an Ubuntu LiveCD that allows me to recover and investigate in a known, working environment. Now I want to reformat this USB stick and reinstall it with casper-rw persistence. I did this a few times before with a FAT-formatted USB stick. It was a horror. The USB drive corrupted constantly: people accidentally removing the stick, the computer not shutting down properly, etc. Now I want to create a multi-partition USB stick so I can put Ubuntu on an ext partition, but still be able to store some Windows stuff on it by having a secondary FAT partition. However, I read somewhere that Windows will only check the first partition on USB sticks, which is a problem if the first partition is the bootable Linux one. Is this possible in some way? EDIT Perhaps it wasn't clear what the problem is. The problem is that I read somewhere that Windows will only recognize the first partition on a USB stick. But I want two partitions, an ext partition and a FAT partition. No issues so far, but in order to be bootable the ext partition must be the first one!
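
    One commonly suggested layout, sketched below, keeps the FAT partition first (so Windows sees it) and puts the persistence data on a second ext partition labelled casper-rw; the live system itself boots from the FAT partition via syslinux, so the ext partition does not have to come first. This is a sketch under those assumptions, not a tested recipe, and /dev/sdX and the sizes are placeholders.

        # Assumes the stick is /dev/sdX and is empty - double-check the device name!
        sudo parted /dev/sdX --script mklabel msdos
        sudo parted /dev/sdX --script mkpart primary fat32 1MiB 4GiB    # Windows-visible, holds the live system
        sudo parted /dev/sdX --script mkpart primary ext4 4GiB 100%     # persistence partition
        sudo mkfs.vfat -n MULTIBOOT /dev/sdX1
        sudo mkfs.ext4 -L casper-rw /dev/sdX2    # the casper-rw label is what enables persistence
        # Then write the live system to /dev/sdX1 (e.g. with usb-creator or unetbootin)
        # and boot with the "persistent" kernel option.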

    Read the article

  • Zero downtime deployment (Tomcat), Nginx or HAProxy, behind hardware LB - how to "starve" old server?

    - by alexeypro
    Currently we have the following setup. Hardware Load Balancer (LB) Box A running Tomcat on 8080 (TA) Box B running Tomcat on 8080 (TB) TA and TB are running behind LB. For now it's a pretty complicated and manual job to take Box A or Box B out of the LB to do a zero-downtime deployment. I am thinking of doing something like this: Hardware Load Balancer (LB) Box A running Nginx on 8080 (NA) Box A running Tomcat on 8081 (TA1) Box A running Tomcat on 8082 (TA2) Box B running Nginx on 8080 (NB) Box B running Tomcat on 8081 (TB1) Box B running Tomcat on 8082 (TB2) Basically the LB will now be directing traffic between NA and NB. On each of the Nginx instances we'll have TA1, TA2 and TB1, TB2 configured as upstream servers. Once one of the upstreams' healthcheck pages is unresponsive (shut down), the traffic goes to the other one (HttpHealthcheckModule module on Nginx). So the deploy process is simple. Say TA1 is active with version 0.1 of the app. The healthcheck on TA1 is OK. We start TA2 with its healthcheck returning ERROR, so Nginx is not talking to it. We deploy app version 0.2 to TA2. Make sure it works. Now we switch the healthcheck on TA2 to OK and the healthcheck on TA1 to ERROR. Nginx will start serving TA2 and will remove TA1 from rotation. Done! And now the same with the other box. While it all sounds cool and nice, how do we "starve" Nginx? Say we have pending connections and some users on TA1. If we just turn it off, sessions will break (we have cookie-based sessions). Not good. Any way to starve traffic to one of the upstream servers with Nginx? Thanks!
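
    For reference, a minimal sketch of the kind of upstream block described above, using only stock nginx directives (the HttpHealthcheckModule mentioned is a third-party add-on and is not shown). Marking a backend `down` and running `nginx -s reload` stops new requests to it while the old worker processes finish the requests they already hold; it does not by itself keep long-lived cookie-based sessions alive, which is the open question here. Addresses and ports follow the layout in the question.

        upstream tomcat_pool {
            server 127.0.0.1:8081;          # TA1, currently live
            server 127.0.0.1:8082 down;     # TA2, taken out of rotation during deploy
        }

        server {
            listen 8080;
            location / {
                proxy_pass http://tomcat_pool;
                proxy_next_upstream error timeout http_503;   # fail over if a backend misbehaves
            }
        }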

    Read the article

  • Autossh startup on Ubuntu 10.04 - fails after powering off

    - by grant
    I'm using upstart to keep a reverse ssh tunnel alive using autossh, similar to Using Upstart to Manage AutoSSH Reverse Tunnel. This works fine, except after a manual power down I can no longer connect to the machine through the "central server" using the tunnel. I receive "ssh_exchange_identification: Connection closed by remote host". The autossh process is running on the client. I can connect again after restarting networking. I'm trying to figure out why this is failing consistently after a manual shutdown. Is it possible that I need to do some cleanup on startup that would allow the tunnel to work in this situation, or are there some other debugging/troubleshooting steps I can take to determine the problem? Machine A is the client machine, using autossh. This machine sits behind a firewall and uses the following command in upstart to create an ssh tunnel: /usr/bin/autossh -fN -i /keyfile -o StrictHostKeyChecking=no -R 20098:localhost:22 user@centralserver Machine B we'll call the "central server", which sits in the cloud and is the host. This machine is "centralserver" in the command above. When Machine A is hard powered off, and back on, I cannot connect to it by SSH'ing from my machine (C) to Machine B in the cloud, then using the following command to get to Machine A: ssh -p 2098 user@localhost Again, after a reboot of the client (A), this works fine. It is only after a hard power down that the problem occurs. There are autossh processes running on the client machine (A) after powering down and back up, but they just don't seem to be doing their job.
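
    One thing worth checking, offered only as a sketch rather than a confirmed fix: after a hard power-off the central server may still be holding the old forwarded port, so the new connection cannot rebind it and autossh never notices. Asking ssh to fail fast and keep the session monitored, roughly as below, is the usual way to let autossh tear the tunnel down and retry; the option names are standard OpenSSH/autossh ones, but the timings are only examples, and -f is dropped because upstart needs the process in the foreground.

        # sketch of the upstart job's stanzas
        env AUTOSSH_GATETIME=0     # treat even an immediately-dropped connection as retryable
        respawn
        script
            exec /usr/bin/autossh -M 0 -N -i /keyfile -o ExitOnForwardFailure=yes -o ServerAliveInterval=30 -o ServerAliveCountMax=3 -o StrictHostKeyChecking=no -R 20098:localhost:22 user@centralserver
        end script

    On the central server, a ClientAliveInterval setting in sshd_config serves the same purpose from the other side, letting sshd drop the dead session that still owns the forwarded port.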

    Read the article

  • Role of MBR in the booting process

    - by pg4421
    I am new to Stack Overflow, so please correct me if my question seems irrelevant or stupid. I read here in Booting Process: The job of the primary boot loader is to find and load the secondary boot loader (stage 2). It does this by looking through the partition table for an active partition. When it finds an active partition, it scans the remaining partitions in the table to ensure that they're all inactive. When this is verified, the active partition's boot record is read from the device into RAM and executed. My question: I have a hard disk with two operating system images, Windows and Ubuntu, so I assumed both partitions they reside in would be active. So why is there always only one active partition? (I know the active partition is one of the primary partitions, but why does one primary partition get this special status?) I am a bit confused. Please clear this up for me. Thank you so much.

    Read the article

  • File corruption (bad checksums) in large files copied to VMware guest

    - by AllanA
    In setting up a development lab, I've got a desktop system running ESXi 4.1.0 (free license) on SATA RAID 0 (already purchased and configured when I started this job; I'm open to hardware input as it pertains to my problem.) Its guests so far include two Win2008 Server R2 64-bit VMs and one Ubuntu 10.04 64-bit VM. I'm installing onto the Windows servers. We've been copying off some fairly large files (over a gigabyte) for an installation, hoping to install more quickly from a (virtual) hard drive than from the network or from BD-ROM. The problem is that they keep coming up with different checksums from the originals. The file sizes are the same, but md5sum reports different numbers (and so does the installer, as it refuses to continue when the checksums don't match.) I've tried copying directly from the BD-ROM (attaching the guest's optical drive to the host system's physical drive). I've tried copying the large files onto a co-worker's Windows machine from his Blu-Ray drive; when I do that, the checksums match. But when I copy from his machine to the VM guest over a network share, the checksums no longer match. Thinking this meant a corrupt destination drive, I deleted it in vSphere and added another freshly created drive. The problem persists. I'm not sure what to try next.

    Read the article

  • How to change the mail domain server so it's not displaying IP? Changing [email protected] to [email protected]

    - by Pavel
    Hi guys. I'm kind of a noob as a server admin, so please bear with me. I've installed the Postfix mail server and everything is working fine, but the From box is displaying [email protected]. I want to set it up so it displays domainname.com instead of the IP. I just hope you know what I mean. My main.cf in the postfix folder looks like this:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = mail.thevinylfactory
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = mail.thevinylfactory.com, thevinylfactory, localhost.localdomain, localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all

    Can anyone help me with this one? If you need any more details please let me know. Thanks in advance!
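
    For what it's worth, the domain Postfix appends to locally submitted mail comes from myorigin (here pointed at /etc/mailname) and from myhostname. A sketch of the kind of change usually suggested is below; thevinylfactory.com is taken from the mydestination line above and is an assumption about the real domain.

        # /etc/mailname should contain just the bare domain, e.g.
        #   thevinylfactory.com
        myhostname = mail.thevinylfactory.com
        mydomain   = thevinylfactory.com
        myorigin   = $mydomain    # locally submitted mail gets @thevinylfactory.com appended

    After editing, `sudo postfix reload` picks up the change. If the sending application sets an explicit From address containing the IP rather than leaving it blank, Postfix will not rewrite it without extra measures such as smtp_generic_maps.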

    Read the article

  • Limited bandwidth and transfer rates per user.

    - by Cx03
    I searched for a while but couldn't find anything concrete, hopefully someone can help me. I'm going to be running a Debian server on a gigabit port, and want to give each user his/her fair share of internet access. The first objective is easy - transfer rates (speed) per user. From what I've looked at, IPTables/Shorewall could do the job easy. Is this easy to setup, or could one of you point me at a config? I was hoping to limit users at 300mbit or 650mbit each. The second objective gets complicated. Due to the usage of the boxes, most of the traffic will be internal network traffic that does NOT get counted to the quota. However, I still need to limit the external traffic, and if they go over, cut off access (or throttle traffic to a very low speed (10mbit?)). Let's say the user has a 3TB external traffic limit. The IF part is: If the hostname they are exchanging the traffic with DOES NOT MATCH .ovh. or .kimsufi. (company owns multiple TLDs), count to the quota. Once said quota exceeds 3TB, choke them. Where could I find a system to count that for me? It would also need to reset or be able to be manually reset on a monthly basis. Thanks ahead of time!
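
    For the rate-limiting half, a minimal sketch is below (not the quota/hostname accounting, which needs per-user byte counters plus a periodic job). It marks outbound packets by the owning UID and shapes them with HTB classes, which is also what Shorewall's traffic shaping builds underneath; the interface, UIDs and rates are placeholders.

        IFACE=eth0

        # mark outbound packets by owning user
        iptables -t mangle -A OUTPUT -m owner --uid-owner 1001 -j MARK --set-mark 10
        iptables -t mangle -A OUTPUT -m owner --uid-owner 1002 -j MARK --set-mark 20

        # HTB classes: unmarked traffic falls into 1:30
        tc qdisc add dev $IFACE root handle 1: htb default 30
        tc class add dev $IFACE parent 1: classid 1:10 htb rate 300mbit ceil 300mbit
        tc class add dev $IFACE parent 1: classid 1:20 htb rate 650mbit ceil 650mbit

        # send each mark to its class
        tc filter add dev $IFACE parent 1: protocol ip prio 1 handle 10 fw flowid 1:10
        tc filter add dev $IFACE parent 1: protocol ip prio 1 handle 20 fw flowid 1:20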

    Read the article

  • Dual Monitor + Virtual Desktop software (plus for cube)

    - by xenithorb
    I've recently purchased another monitor, my first one being a TV, and much larger. I now sit at a desk and use my shiny new 24" LED more often, but I like to extend the desktop onto the TV. The problem this presents is that to save power and preserve the longevity of my 47" VIZIO, I try to keep it off when possible. What I'm seeking sounds very simple. If any of you have ever used Compiz or Deskspace (Yod'm), you'll know what I'm referring to when I talk about a "cube." The most important functionality I'm looking for is the ability to scroll desktop contents between both displays and virtual desktops. Deskspace does an excellent job of presenting an attractive cube, but it creates a separate cube and virtual desktop space for the second extended monitor (now the TV). Again, what I'm looking to do is scroll between virtual desktops, passing through both monitors. The net effect of this functionality would allow me to scroll the contents of the extended monitor onto the first monitor should a window get caught there, without having to turn on the TV. So imagine the horizontal faces of a cube as being actual, real monitors: is there anything that allows one to rotate desktops between displays?

    Read the article

  • I can't figure out my PHP problem. Can anyone help with the PHP code? [closed]

    - by Jeffery
    when I click the submit button it gives me an error page. Here is the site http://nealconstruction.com/estimate.html

        $emailSubject = 'Estimate'
        $webMaster = '[email protected]'
        /* Gathering Info */
        $emailField = $_POST ['email'];
        $nameField = $_POST ['name'];
        $phoneField = $_POST ['phone'];
        $typeField = $_POST ['type'];
        $locationField = $_POST ['location'];
        $infoField = $_POST ['info'];
        $contactField = $_POST ['contact'];
        $body = <<<EOD
        Email: $email
        Name: $name
        Phone Number: $phone
        Type Of Job: $type
        Location: $location
        Additional Info: $info
        How to Contact: $contact
        EOD;
        $headers = "From: $email\r\n";
        $headers .= "Content-Type: text/html\r\n";
        $success = mail($webMaster; $emailSubject; $body; $headers);
        /* Results rendered as html */
        $theResults = <<
        JakesWorks - travel made easy-Homepage
        Thank you for your information! We will contact you very soon!
        EOD;
        echo "$theResults";
        ?
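
    A few things jump out, offered as a sketch against the code above rather than a drop-in fix (the address is a placeholder and the HTML output is trimmed): the first two assignments are missing semicolons, mail() takes comma-separated arguments rather than semicolons, and the message body interpolates $email, $name, etc. while the POST values were stored in $emailField, $nameField, and so on. A minimal corrected version, using plain string concatenation to sidestep heredoc pitfalls, might look like this:

        <?php
        $emailSubject = 'Estimate';                  // semicolon was missing
        $webMaster    = 'webmaster@example.com';     // placeholder address

        /* Gathering info - names kept consistent with the message body below */
        $email    = $_POST['email'];
        $name     = $_POST['name'];
        $phone    = $_POST['phone'];
        $type     = $_POST['type'];
        $location = $_POST['location'];
        $info     = $_POST['info'];
        $contact  = $_POST['contact'];

        $body = "Email: $email\n"
              . "Name: $name\n"
              . "Phone Number: $phone\n"
              . "Type Of Job: $type\n"
              . "Location: $location\n"
              . "Additional Info: $info\n"
              . "How to Contact: $contact\n";

        $headers  = "From: $email\r\n";
        $headers .= "Content-Type: text/plain\r\n";

        // mail() arguments are separated by commas, not semicolons
        $success = mail($webMaster, $emailSubject, $body, $headers);

        echo $success ? 'Thank you for your information! We will contact you very soon!'
                      : 'Sorry, the message could not be sent.';
        ?>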

    Read the article

  • Specific issue on data pump API in oracle

    - by Median Hilal
    I have a client/server architecture, using an Oracle DBMS on the database server side. I need to perform a user-triggered (from the client side) backup of the database, and the best way to do that is a stored procedure on the server side which the client can call, as the client has no Oracle tools to perform the backup. I've searched thoroughly through the available solutions and found that using a stored procedure is the best way. Then I found that the Oracle Data Pump API is the best thing to use inside a PL/SQL stored procedure. My specific questions about the API concern two issues. The first: the DETACH function that detaches the handle, is it necessary to call it at the end of the procedure, and what happens if I don't? I read the Oracle documentation but I didn't get their point; they say it doesn't terminate the job but indicates that the user is not interested in it, and when I use DETACH at the end of my procedure the exported .dmp file disappears. The second: to perform a user-triggered (client side) backup, as the modifications are only to the data, I used TABLE mode for the export operation. But the VERSION parameter, what should it be? I also read the documentation but couldn't determine what I need (LATEST or COMPATIBLE)? Thanks
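
    For context, a minimal DBMS_DATAPUMP export of the shape being described might look like the sketch below. It assumes a directory object named DATA_PUMP_DIR and a table called MY_TABLE, and it waits for the job to finish before detaching, which is the usual way to make sure the dump file is complete when the procedure returns; this is a sketch of the standard package calls, not the asker's actual procedure, and VERSION is simply left at its default (COMPATIBLE).

        DECLARE
          h      NUMBER;
          state  VARCHAR2(30);
        BEGIN
          -- TABLE-mode export
          h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'TABLE');

          DBMS_DATAPUMP.ADD_FILE(handle    => h,
                                 filename  => 'my_table_exp.dmp',
                                 directory => 'DATA_PUMP_DIR',
                                 filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

          -- restrict the job to the one table we care about
          DBMS_DATAPUMP.METADATA_FILTER(handle => h,
                                        name   => 'NAME_EXPR',
                                        value  => 'IN (''MY_TABLE'')');

          DBMS_DATAPUMP.START_JOB(h);

          -- block until the job completes, then release our handle
          DBMS_DATAPUMP.WAIT_FOR_JOB(handle => h, job_state => state);
          DBMS_DATAPUMP.DETACH(h);
        END;
        /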

    Read the article

  • mod_rewrite with AJAX applications: possible?

    - by MrJackV
    I am trying to run Shell In a Box through another server (the computer running shellinabox is not accessible from the internet). Ideally I could use ProxyPass in the Apache config to have a reverse proxy. The problem is I can't access the conf file. So I tried using .htaccess, and discovered that I cannot use ProxyPass in there. So I used mod_rewrite to do the job instead. Currently I have the following in the .htaccess file: RewriteEngine On RewriteRule ^$ http://10.1.13.236:4200/ [P] However, while it displays the title correctly, and if I open up the source code I can see there is something in the page, nothing is displayed on the screen (it remains blank). My suspicion is that there are problems with AJAX and this kind of proxy. What I am trying to accomplish with mod_rewrite is behaviour as close as possible to ProxyPass (mirroring a website in a subdirectory). Is this possible? Is there some other solution (I tried phproxy and khproxy but neither of them is able to display anything)? Thanks in advance
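
    One likely gap in the rule above, offered as a sketch with the backend address copied from the question: ^$ only matches a request for the directory root, so the JavaScript, CSS and AJAX endpoints the page then requests are never proxied. Rewriting every path under the directory, roughly as below, is closer to what ProxyPass does; whether mod_proxy is actually enabled on the shared host is an assumption.

        RewriteEngine On
        # proxy everything under this directory, not just the empty path
        RewriteRule ^(.*)$ http://10.1.13.236:4200/$1 [P,QSA,L]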

    Read the article

  • Ways of file copy

    - by Tim
    I have sometimes found that when using simple right-click copy-and-paste, some files/directories are not copied completely, or not at all, for various reasons, such as saved webpage files/directories having strange characters in their names, or names that are too long. For example, in Windows 7, I save this webpage http://www.howtogeek.com/howto/windows-vista/working-around-windows-vistas-shrink-volume-inadequacy-problems/ completely in a very deep directory whose parent directories may have long names, and I cannot copy its top ancestor directory, as Windows complains that the filename for the saved webpage directory is too long. In Ubuntu, sometimes I can save a file with some special character such as a newline under some directory. But when I copy that directory, it will say the file name has some special character and I will have to manually remove the character. Such complaints come up in both Windows and Ubuntu. I was wondering what some better ways are to accomplish the copy job in both Windows and Ubuntu. For example, would archiving everything to be copied into a single archive help? If so, how do I do that? Thanks and regards!
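
    On the archiving idea, a sketch of how that usually looks on the Linux side is below: tar stores arbitrary characters and long paths inside the archive, so only the archive file itself has to be acceptable to the destination filesystem. The paths are placeholders; on Windows, a tool like 7-Zip or robocopy plays a similar role.

        # pack the directory, preserving permissions; the archive name is the only
        # filename the destination filesystem has to accept
        tar -cpf saved_pages.tar ~/saved_pages/

        # copy saved_pages.tar wherever you like, then unpack it:
        tar -xpf saved_pages.tar -C /destination/path/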

    Read the article

  • Advice for UPS/surge protector in a home office

    - by user37755
    I'm just starting out as an independent developer, mostly Unix stuff with some Windows thrown in occasionally. I've been running two machines, a Linux and a Windows dev machine. Long story short, we had a bad storm come through last week and I unplugged one machine, forgot to unplug the other, and the p/s and mobo ended up dead. Luckily I back up to an external service religiously (rsync.net for anyone interested), so there was no loss of data, but it did show me a glaring hole in my current setup, namely the lack of a UPS and surge protection (this has honestly never been an issue before). Can anyone recommend a UPS/surge protector for a home office? It only needs to support a single machine (I opted to use vmware instead of rebuilding that machine), but it's a quad core Phenom 2 with a 1k watt p/s. This is outside my experience so I thought I'd get some input from others. I'm looking for something that's reasonably priced and does the job reasonably well. I don't need absolute 100% uptime, just something to protect my PC better than it is now.

    Read the article

  • Setup Apache with IPv6

    - by mrz
    I have two virtual machines on my computer. I have Apache installed on one of them (referred to as the "server" from here on), and I have set Apache to listen on an IPv6 address. When I enter that IPv6 address into a web browser on the "server" I see my index.html file. So far so good... I want to be able to open a web browser on the other virtual machine (the "client") and see index.html. But when I try entering the IPv6 address of the "server" in a web browser on the "client" I get an "Unable to establish a connection to the server" error. I can ping6 the "client" from the "server" and vice versa. There is only one thing to mention: ifconfig on the "server" shows three different IPv6 addresses, two of which are Global scope, and one is Link scope. On the "client" there is only one Link scope IPv6 address though. I can only ping the Link-scope addresses; pinging the other IPv6 addresses results in "connect: Network is unreachable". And if I set Apache to listen on the Link-scope IPv6 address, rcapache2 start fails. Any thoughts on what I am probably missing/doing wrong?
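
    For reference, a sketch of the relevant Apache bits: IPv6 literals in Listen need square brackets, and link-local (fe80::...) addresses are only meaningful together with an interface scope (fe80::1%eth0), which Apache cannot bind to by name, so the usual choice is to listen on a global address or on all addresses. The address below is a documentation-prefix placeholder, and the file is /etc/apache2/listen.conf on the SUSE systems that rcapache2 comes from.

        # a specific global IPv6 address
        Listen [2001:db8::10]:80
        # or simply listen on every configured address, v4 and v6:
        # Listen 80

    When testing link-local connectivity between the VMs, ping6 also needs the interface, e.g. `ping6 -I eth0 fe80::...`, which may explain why only some addresses answer.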

    Read the article

  • RSS "Newspaper" / Google Reader replacement

    - by Sean D
    With the impending demise of Google Reader I've been looking at ways to replace it. I've decided that what might be cool is to get an email every morning, with all the updates from the last twenty-four hours, maybe in the style of a newspaper. That's not a very original idea, since sites like http://fivefilters.org/pdf-newspaper/ and http://feedjournal.com/ already do this, but they both have various drawbacks. In particular both require a single feed, will just take the last n items, and clicking around on their website. The Pro option for feedjournal seems almost like it would do the job, but the project seems to be dead, and there's no way to buy it. Before I hack together something crazy I'd like to know if there's a better solution to my problem. In short: I want to replace Google Reader with a daily pdf email, how should I do this? edit: I didn't award the bounty because nobody solved the problem (not that I'm assuming it has a solution). Answers like "well for the way I do things this wouldn't work" aren't actually helpful, even if they are well-meaning.
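
    In the absence of a ready-made service, a small cron-driven script is one way to get a daily digest mail. The sketch below produces a plain-text digest rather than the newspaper-style PDF asked for, and assumes a local SMTP server, the feedparser Python library, and placeholder feed URLs and addresses; filtering to the last twenty-four hours is left out for brevity.

        #!/usr/bin/env python
        import smtplib
        from email.mime.text import MIMEText

        import feedparser  # pip install feedparser

        FEEDS = ["http://example.com/feed1.rss", "http://example.com/feed2.rss"]  # placeholders
        TO, FROM = "you@example.com", "digest@example.com"

        lines = []
        for url in FEEDS:
            feed = feedparser.parse(url)
            lines.append(feed.feed.get("title", url))
            for entry in feed.entries[:20]:            # most recent items per feed
                lines.append("  %s\n    %s" % (entry.get("title", ""), entry.get("link", "")))
            lines.append("")

        msg = MIMEText("\n".join(lines), "plain", "utf-8")
        msg["Subject"] = "Daily feed digest"
        msg["From"], msg["To"] = FROM, TO

        s = smtplib.SMTP("localhost")                  # assumes a local mail server
        s.sendmail(FROM, [TO], msg.as_string())
        s.quit()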

    Read the article

  • cause for mysql crash

    - by user1322092
    A cron job automatically restarted my MySQL database. What was the cause of the crash, and can you suggest how to resolve or monitor it? I would REALLY appreciate your input.

        120715 14:38:58  mysqld started
        120715 14:38:58  InnoDB: Started; log sequence number 0 411137570
        120715 14:38:58 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
        120715 15:14:21 [Note] /usr/libexec/mysqld: Normal shutdown
        120715 15:14:23  InnoDB: Starting shutdown...
        120715 15:14:25  InnoDB: Shutdown completed; log sequence number 0 411166467
        120715 15:14:25 [Note] /usr/libexec/mysqld: Shutdown complete
        120715 08:14:25  mysqld ended
        120715 08:14:26  mysqld started
        120715  8:14:26  InnoDB: Started; log sequence number 0 411166467
        120715  8:14:26 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
        121212 09:15:32  mysqld started
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        121212  9:15:58  InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        121212  9:17:28  InnoDB: Started; log sequence number 0 554145193
        121212  9:17:57 [Note] /usr/libexec/mysqld: ready for connections.
        Version: '5.0.95'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
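
    The log itself only shows that InnoDB had to run crash recovery on 121212, i.e. mysqld went away without a clean shutdown; the reason (out-of-memory kill, power loss, a kill -9 from the restart cron job) has to come from outside MySQL. A couple of places worth checking on a typical Linux host, as a sketch:

        # Was mysqld killed by the kernel OOM killer?
        sudo grep -iE 'out of memory|killed process' /var/log/messages /var/log/syslog 2>/dev/null

        # What did the kernel log around the crash time?
        sudo dmesg | tail -n 100

        # What does the restart cron job actually do? (look for kill/killall)
        sudo grep -r mysql /etc/cron* /var/spool/cron 2>/dev/null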

    Read the article

  • Linux Has Become Very Slow Dealing With Large Data

    - by Kohjah Breese
    Last year I bought a computer for around $1,800, so it is relatively high-end. When I first got it I was particularly pleased at how quickly it dealt with large MySQL queries, imports and exports. But somewhere along the way something has gone wrong and I am not sure how to diagnose the problem. Any job that involves processing large amounts of data, e.g. gzipping a file of c. 1GB+, UPDATEs on large MySQL tables, etc., has become very slow. I just performed an intensive ALTER statement on a 240,000,000 row table on a remote server, which is lower spec. This took about 10 minutes. However, performing the same query on a 167,000,000 row table on my computer went fine until it hit 860MB. Now it is only writing about 1MB every 15 seconds. Does anyone have any advice on debugging what the issue is? I am using Linux Mint (based on Ubuntu 12.04). The home partition is encrypted, which really slows down gzip. I have noticed the swap is barely used, but am not sure if that is because there is more than enough RAM. The filesystem is ext4. The MySQL server is on a separate hard drive, but it was fine when I first installed it. Other than the above issues, there are no other problems with it. I am going to install a fresh Ubuntu on the 4th hard drive to see if that is any different.
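
    A reasonable first step is to establish whether the slowness is the disk, the CPU (encryption), or memory pressure while one of these jobs is running; a sketch of the usual tooling is below, with package names as on Ubuntu/Mint and /dev/sda as a placeholder.

        sudo apt-get install sysstat smartmontools

        # watch disk utilisation and await times while the slow job runs
        iostat -x 5

        # memory pressure and swap activity (si/so columns)
        vmstat 5

        # rule out a failing drive
        sudo smartctl -H -a /dev/sda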

    Read the article

  • ZFS & Deduplicating FLAC Data

    - by jasongullickson
    I'm experimenting with using ZFS to deduplicate a large library of FLAC files. The purpose of this is twofold: reduce storage utilization, and reduce the bandwidth needed to sync the library with cloud storage. Many of these files are of the same music tracks but from different physical media. This means that for the most part they are the same and usually close to the same size, which makes me think that they should benefit from block-level deduplication. However, in my testing I'm not seeing good results. When I create a pool and add three of these tracks (identical songs from different source media), zpool list reports a dedup ratio of 1.00. If I copy all of the files (make exact duplicates of the three) the dedup ratio climbs, so I know that it is enabled and functioning, but it's not finding any duplication in the original collection of files. My first thought was that perhaps some of the variable header data (metadata tags, etc.) might be mis-aligning the bulk of the data in these files (the audio frames), but even making the header data consistent across the three files doesn't seem to have any impact on deduplication. I'm considering taking alternate routes (testing other dedupe filesystems as well as some custom code), but since we're already using ZFS and I like the ZFS replication options, I'd prefer to use ZFS dedup for this project; but perhaps it's simply not capable of working well with this sort of data. Any feedback regarding tuning that might improve dedup performance for this sort of dataset, or confirmation that ZFS dedup is not the right tool for this job, is appreciated.
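
    One way to test the alignment theory without rebuilding pools is sketched below (pool and dataset names are placeholders): zdb -S simulates the dedup table for an existing pool, and since ZFS dedup only matches whole, block-aligned records, FLAC files whose audio frames start at different byte offsets will generally not dedup even when the frames themselves are identical.

        # what would dedup save on this pool? (simulation, no changes made)
        sudo zdb -S tank

        # dedup is a per-dataset property; check it and the record size in use
        zfs get dedup,recordsize tank/flac

        # current pool-wide dedup ratio (DEDUP column)
        zpool list tank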

    Read the article

  • Can't upgrade MySQL Server on new Ubuntu 12.04 install

    - by user179627
    After freshly installing Ubuntu Server 12.04, I did the usual apt-get update / apt-get upgrade, which failed for mysql-server-5.5: Setting up mysql-server-5.5 (5.5.31-0ubuntu0.12.04.2) ... start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.5; however: Package mysql-server-5.5 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured I tried a wide variety of approaches suggested by googling, which involved various combinations of apt-get remove/purge/install -f/reinstall, etc., with no luck. I also tried downloading the package directly from launchpad.net and running dpkg -i on it (this had worked for a similar issue with a kernel upgrade), but to no avail. I'm not actually particularly interested in what's going on with mysql per se (though I will need to figure it out at some point); at this point, my primary concern is that I am unable to apt-get install other packages! What to do?
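
    The dpkg error only says that starting the mysql upstart job failed inside the package's post-install script, so the useful detail is in MySQL's own logs. A sketch of where to look on 12.04 and how to let dpkg finish once the underlying start failure is fixed (paths are the stock Ubuntu ones):

        # why did the upstart job fail to start?
        sudo tail -n 50 /var/log/upstart/mysql.log
        sudo tail -n 50 /var/log/mysql/error.log

        # try starting it by hand to reproduce the error
        sudo start mysql

        # once mysql starts cleanly, let dpkg re-run the failed configure steps
        sudo dpkg --configure -a
        sudo apt-get install -f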

    Read the article

  • I have to manually change the DNS suffix order every time I connect to VPN. Can I change this permanently or fix the problem somehow?

    - by CarlB
    Sorry in advance but I'm a programmer, not a network engineer, so I'm a noob at this stuff. Anyway, when I am not connected to VPN from my work PC at home, I have the following DNS suffixes listed (real domain names substituted): enterprise.org network.org company.com us.enterprise.org After connecting to VPN, one more DNS suffix is added to the very top of the list: problem-domain.com At this point, most network functions that I can normally perform when actually connected to the LAN in the office are unusable. I get error messages about the network paths not being found and what-not. Anyway, I played around with the suffixes and realized that if I just moved problem-domain.com down one spot to the second in the list, all the problems went away. Unfortunately, it returns to the top spot every time I reconnect, and I tend to get disconnected frequently. Is there something else I can do about this or should I just contact the IT department? I've had this problem before and they weren't able to resolve it but I suppose it would be worth trying again if I could get a different person on the job. What I don't understand is that I thought it didn't matter what order the suffixes were in? Isn't Windows supposed to go through each suffix until it finds a match (or has gone through all the suffixes)? Why is it quitting after the first one? Thanks in advance.
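
    If the suffix list itself is fine and only the order is wrong, one workaround is a small script run after connecting that rewrites the order with problem-domain.com in second place, as sketched below. This assumes the list is controlled by the local SearchList registry value rather than pushed by group policy or rewritten by the VPN client on every connect, and the suffix names are the ones from the question.

        # Run as Administrator after the VPN comes up
        $key  = 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters'
        $list = 'enterprise.org,problem-domain.com,network.org,company.com,us.enterprise.org'
        Set-ItemProperty -Path $key -Name SearchList -Value $list
        ipconfig /flushdns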

    Read the article

  • 1 PC, 2 consoles (as in 2 monitors, keyboards and mice)

    - by ciuly
    I have this desire to "kill two birds with one shot". Currently, I have one server running round the clock, one laptop that runs about 8 hours a day, 7 days a week, and a desktop that runs about the same length of time. All three are... old, to say the least. So there is a great need to upgrade (well, the server might handle its job for another year or so, but that only depends on how much time I have to put it to "work"). Now, I'm "dreaming" of only one PC. I'm thinking of VMware's ESX. So there will be a VM for the server, a VM for the "laptop" and one for the "desktop". And obviously I'll have to somehow "link" a set of monitor/keyboard/mouse with one of the laptop/desktop VMs. The server doesn't need such things, obviously (it doesn't have them at this moment either). Is something like this possible? ESX is not a requirement; it's just something I found that answers part of my problems, but there still remain the two monitor/keyboard/mouse sets that need connecting and "linking" to the appropriate VM. Why would I want to do this? Well, first of all, it's much cheaper to upgrade one PC than three. Then, the power consumption is obviously lower. Plus the extra space. Plus it allows me to better separate networks and services. Thanks.

    Read the article

  • Can my employer force me to backup my personal machine? [closed]

    - by Eric B
    Here's the background: Approximately 1.25 years ago, the company I work for was acquired by a larger 400 person company. Before acquisition (and today still) we are all remote employees using our own personal hardware for work-related duties (coding, email, etc). We are approximately 15 employees within the larger organization. Some time after acquisition, the now owning company was slapped with a civil lawsuit. Part of this lawsuit (discovery) is requiring them to retrieve & store from us any related information. Because we were a separate company up until acquisition, there is a high probability that our personal machines might contain information about what the lawsuit alleges (email, documents, chat logs?, etc). Obviously, this depends largely on the person's job function (engineer vs. customer support vs. CEO). All employees are being required to comply. Since acquisition (1.25 yrs), the new company has not provided us with company laptops/desktops. We continue to use personal hardware, licenses, etc for work. Email is via POP3s and not hanging around on the mail server - it's on everyone's client. Documents are spread across personal machines. So, now they want us each to backup our complete personal machines. They are allowing us to create a "personal" folder where we can place personal documents. That single folder will be excluded from backup. Of course, that means total re-arrangement of documents, etc. For most of us, 99% of the data on the machine is NOT related to work. So, what's the consensus? Should we comply? What is their recourse if we do not?

    Read the article

  • How to configure SVN server for my own project

    - by user1729952
    I work with a team on an Android project using the Eclipse IDE. We need to use version control, and we need to access the repo remotely. I have no experience using or installing servers, and a little experience using SVN on Windows, but I still have problems connecting to it remotely. I need to use no-ip.com since my IP changes; however, I failed to make VisualSVN Server work with No-IP. What options do I have? The best thing would be to get it working on Windows; if not, I have another computer running Ubuntu 12.04.1. I have installed apache2svn on it to try to get it working; svn is installed, and I went through tutorials to configure the access protocols, but I can't figure out how to access it remotely from another computer. Can someone tell me the steps I need to get this job done, so I can do my own research for each step? (Please explain each step, as I may not be familiar with some keywords or phrases.) EDIT: Also worth noting that my company has a website hosted on a remote server; can we use it as a repo, and how? It's running Linux.
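
    As a sketch of the simplest self-hosted route on the Ubuntu box (svnserve rather than Apache): the repo path, repo name and hostname below are placeholders, and TCP port 3690 has to be forwarded on the router so the No-IP hostname reaches the machine.

        sudo apt-get install subversion

        # create a repository and open it up to authenticated users
        sudo mkdir -p /srv/svn
        sudo svnadmin create /srv/svn/androidproject
        sudo nano /srv/svn/androidproject/conf/svnserve.conf   # set: anon-access = none, auth-access = write, password-db = passwd
        sudo nano /srv/svn/androidproject/conf/passwd          # add:  username = password

        # run the svn daemon (listens on TCP 3690)
        svnserve -d -r /srv/svn

        # from any remote machine (or Eclipse/Subclipse), once port 3690 is forwarded:
        svn checkout svn://yourname.no-ip.org/androidproject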

    Read the article
