Search Results

Search found 23127 results on 926 pages for 'based'.

Page 698/926 | < Previous Page | 694 695 696 697 698 699 700 701 702 703 704 705  | Next Page >

  • Configuration for Router Behind Uverse 2Wire

    - by Nori
    I have a 2Wire Uverse router (RG). I'm not particularly fond of it and want to use it as a modem only. I have a Linksys router running Tomato firmware and want that to act as the actual router. Most of the "guides" I've found in my searches say to enable DMZ Plus mode, but I don't see a way to make that work while my router has a static IP address. I played with it for quite some time yesterday and didn't find a way to get it working. Then I ran across a setting under broadband for configuring another network. I played with that for a little bit but ran out of time and couldn't get it to work. So my question is for anyone out there who has Uverse and has successfully set up a Tomato-based router behind the RG: how did you get it configured? I'm sure if I kept playing with it I could get it to work, but if someone already has it working, that would make my life easier. Thoughts? Thanks.
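
    A minimal sketch of the approach that usually gets mentioned, assuming the RG's DMZplus mode hands its public IP to whichever device requests a DHCP lease (wan_proto is the usual NVRAM name on Tomato builds, so verify it on your firmware; the static assignment then lives on the RG side rather than on Tomato):

      # on the Tomato box: let the WAN side take a DHCP lease from the RG,
      # then put the Linksys in DMZplus on the RG so the lease it receives
      # is the public IP
      nvram set wan_proto=dhcp
      nvram commit
      reboot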

    Read the article

  • Can't resolve offline file conflicts

    - by Bryan
    We use roaming profiles on our Server 2008 R2 domain, with folder redirection for 'desktop', 'my documents' and 'application data'. As our network is split across two sites, we have one file server at each site, configured to use domain-based DFS namespaces and DFS replication to keep things in sync. The DFS path for the replicated folders is as follows: \\domain\folderredirection$\<username>\<redirected-folder-name> The real paths are \\site-1-server\folderredirection$\<username>\<redirected-folder-name> and \\site-2-server\folderredirection$\<username>\<redirected-folder-name> As our users all switch between sites (sometimes several times per day), our folder redirection policy has to point at the DFS roots rather than being hardcoded to a specific server. Both DFS and DFS-R have been proven to be working perfectly. On our laptops we use offline files for the redirected folders, and this also works fine. The problem is as follows: when conflicts occur in offline files, it is impossible to resolve them. I'm given the usual conflict resolution options (i.e. 'Ignore', 'Keep Both', 'Keep network' and 'Keep local'), but none of these options resolves any conflict, and yet no error is produced. We only use offline files on laptops, which run either Windows XP Professional or Windows 7 Professional. The problem is not specific to any one laptop; it affects every laptop and every conflicting file in exactly the same way. I would have thought the setup we have is common for companies with multiple sites, so I'm hoping someone will have seen this before?
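
    In case the client-side cache itself has gone bad on a given laptop (which can produce exactly this "no error, nothing happens" behaviour), one hedged last resort on Windows 7 is to flag the Offline Files database for re-initialisation. This discards all locally cached copies, so make sure anything unsynced is saved elsewhere first:

      rem run from an elevated prompt; the FormatDatabase value is read once at the next boot
      reg add "HKLM\SYSTEM\CurrentControlSet\Services\CSC\Parameters" /v FormatDatabase /t REG_DWORD /d 1 /f
      shutdown /r /t 0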

    Read the article

  • Does fast typing influence fast programming?

    - by Lukasz Lew
    Many young programmers think that their bottleneck is typing speed. After some experience one realizes that this is not the case; you have to think much more than you type. At some point my room-mate forced me to turn off the light (he sleeps during the night), so I had to learn to touch type, and I experienced an actual improvement in programming skill. The most surprising part was that the improvement was not due to sheer typing speed, but to a change in mindset: I'm less afraid now to try new things and refactor them later if they work well. It's like having a new tool in the bag. Has anyone of you had a similar experience? I have since trained touch typing a little with KTouch, and I find the auto-generated lessons the best. I can use the program to create new lessons out of text files, but that is only verbatim training, not auto-generated from a language model. Do you know of any touch typing program that allows creation of custom but randomized lessons?
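
    Not a language model, but as a stopgap you can roll randomized drills yourself and feed them to KTouch as a custom lecture; a quick sketch (the file names and the word count are arbitrary):

      # pull the most frequent words out of any text or code corpus, shuffle them,
      # and wrap the result into 60-character practice lines
      tr -cs '[:alpha:]' '\n' < corpus.txt | tr '[:upper:]' '[:lower:]' \
        | sort | uniq -c | sort -rn | head -300 | awk '{print $2}' \
        | shuf | paste -sd' ' - | fold -sw 60 > lesson.txt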

    Read the article

  • Disabled FRS replication on a DFS link, but the targets still list the replica set in their FRS conf

    - by Graeme Donaldson
    It's been a while since I've had to deal with the wonders of FRS, so I'm doing some testing to refresh my memory. This is what I've done so far. I am stuck with FRS rather than DFS-R for the moment since not all of my link targets are running R2. Created a domain-based DFS root, hosted on 4 servers. Created a DFS link under the root, targeted at 2 servers. The shares on both servers were empty. Dropped about 500MB of data into the target folder on one server and waited for replication to complete. Added/removed/modified files on both targets and confirmed that changes are replicated within a few seconds. Deleted the contents of the target folder on 1 server and waited for the other server to replicate the deletion. All of this worked perfectly, so now I want to remove my DFS link since I only created it for testing purposes. This is where it gets weird. I'm pretty sure that in the past I've disabled replication on the DFS link and after a short amount of time each target would log an info event in the FRS event log, something along the lines of "this server is no longer a member of replica set X". I have waited about 3 hours and I haven't seen this happen. ntfrsutl ds tells me that the server is not a member of any set, which is expected because when I disable replication on the link, the AD attributes on the computer object are removed. The weird part is... ntfrsutl sets still shows me the replica set, with all the properties, etc. So it seems like the FRS-related attributes of the target server's AD object are gone, but the FRS service for some reason hasn't removed the replica set. Can anyone see what I have done wrong?
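
    A hedged next step, assuming the service is simply still working from cached directory data: force an immediate AD polling cycle, re-check, and only then restart FRS so it rebuilds its view (worth trying on one target first):

      rem kick off an immediate DS polling cycle instead of waiting for the interval
      ntfrsutl poll /now
      rem re-inspect the replica sets the service believes it belongs to
      ntfrsutl sets
      rem if the stale set is still listed, a service restart forces a rebuild
      net stop ntfrs && net start ntfrs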

    Read the article

  • File download speed issue over a dedicated fibre link

    - by nixnotwin
    My ISP has installed a fibre-based dedicated internet connection at the place where I work. In the beginning the connection terminated at one of the ISP's core routers, which resulted in a strange issue: even though the assigned speed was 5 Mbps, when we tested by downloading large files over HTTP and FTP from multiple locations, the speed never went above 2 Mbps, yet BitTorrent downloads reached 5 Mbps, and downloads from the ISP's own servers were fine. So the ISP attached our link directly to their edge router instead. After this, file downloads from high-bandwidth servers like Google and Microsoft reached the 5 Mbps limit, although sometimes the speed falls below 2 Mbps and then suddenly jumps back up to 5 Mbps (this keeps happening within a single file download). Other downloads, like the Ubuntu apt repositories, still struggle to go above 2 Mbps. The engineers at the ISP have not been able to sort out the issue. When they moved us to their edge router, instead of giving us 8 public IPs they only gave us 4; when we enquired, they told us that giving out more IPs would cause ARP overload on their edge router, but I was eventually able to convince them to give us the 8 IPs we wanted. The file download issue has remained, though. What might be the reason for files from different locations downloading at different speeds, and with such heavy fluctuation? I have downloaded files from the same URLs over a connection belonging to another, smaller ISP, and the speeds were fine and reached the full 5 Mbps limit.
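
    A minimal way to compare a fast path and a slow one from the link itself (the hostname and file are just examples; pick any large file on a mirror that performs badly). Heavy per-hop loss or jitter in the mtr report, combined with a sawtooth download speed, usually points at congestion or a rate policer somewhere on the ISP side:

      # path quality to the slow server, 100 probes, report mode
      mtr -rwc 100 archive.ubuntu.com
      # sustained single-stream throughput to the same server
      curl -o /dev/null -w 'avg: %{speed_download} bytes/s\n' \
        http://archive.ubuntu.com/ubuntu/ls-lR.gz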

    Read the article

  • How do I get "Back to My Mac" (using MobileMe) from Windows?

    - by benzado
    I have a MobileMe subscription and a Mac at home with "Back to My Mac" enabled. When I'm away from home, this service lets me use another Mac to connect to my Mac back home and access file sharing, screen sharing, etc. As far as I know, the service doesn't use any proprietary protocols, so in theory I should also be able to get "Back to My Mac" from a Windows PC. This MacWorld article explains how it works. Basically, it uses Wide-Area Bonjour to give your Mac a domain name like hostname.username.members.mac.com. Remote computers can find your Mac using that address, then connect to it using a private VPN. The "Wide Area Bonjour" part seems to make it a little more complicated than simply a regular domain name, though. Note that I'm not interested in using the methods described by LifeHacker, which doesn't use the MobileMe service at all. I don't want to use a totally different dynamic DNS service. I'd like to use the one I'm already paying for, or at least find out why that's not possible from Windows. Also, my primary problem is finding a network route back to my mac... once I've got that I know how to enable services so that Windows can talk to it. UPDATE: Based on some additional research, it appears that Apple is only assigning IPv6 addresses to the hostname.username.members.mac.com names. So any solution will require enabling IPv6 support on Windows, if possible.
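
    A couple of hedged checks from the Windows side before worrying about the VPN part (the hostname below is just the documented pattern, not a real account): confirm that the Back to My Mac name actually resolves to an IPv6 address for you, and that the PC has some form of IPv6 connectivity at all, for example Teredo.

      rem look up the AAAA record MobileMe publishes for the machine
      nslookup -type=AAAA hostname.username.members.mac.com
      rem enable the Teredo IPv6 transition tunnel (works even on domain-joined PCs)
      netsh interface teredo set state enterpriseclient
      rem then see whether the host is reachable over IPv6 at all
      ping -6 hostname.username.members.mac.com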

    Read the article

  • My email server is being blocked by Yahoo: TS03 Message permanently deferred.

    - by bilygates
    Hello, My mail server has been getting the following error from Yahoo's mail servers for about a month: postfix/smtp[23791]: host g.mx.mail.yahoo.com[98.137.54.238] refused to talk to me: 421 4.7.1 [TS03] All messages from [my ip] will be permanently deferred; Retrying will NOT succeed. See http://postmaster.yahoo.com/421-ts03.html I have exchanged about 4 emails with Yahoo's support team. The first three seemed like automated messages, and the 4th told me that there is nothing they can do, but that if I change my policies I can send them another email in 6 months. They also told me: "However, based on the information you have provided us, we cannot systematically deliver your email to the Inbox at this time. We suggest that you ask your users to set up a filter in Yahoo! Mail to ensure that they get your email messages in their Inbox." The problem is that my email doesn't even get to their Spam folder; the server won't allow any connections. I have never sent spam messages, not even newsletters. I only send emails to my new users so they can activate their accounts. I've also implemented DKIM and told Yahoo about this. I have checked my configuration with http://www.myiptest.com/staticpages/index.php/DomainKeys-DKIM-SPF-Validator-test and it reports that both SPF and DKIM are set up correctly. What should I do? Basically, I'm losing new users every day. Any help will be appreciated. P.S.: I apologize if this particular question has already been asked. I searched for it but didn't find it.
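
    For what it's worth, a quick sanity check of what the outside world sees (example.com, the selector "default" and the IP are placeholders for your own domain, DKIM selector and sending address); Yahoo also weighs matching forward/reverse DNS for the sending IP quite heavily:

      dig +short TXT example.com                      # published SPF record
      dig +short TXT default._domainkey.example.com   # published DKIM public key
      dig +short -x 203.0.113.25                      # PTR for the sending IP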

    Read the article

  • How to install port versions of perl modules for perl5.14 in freebsd 9.0

    - by jm666
    Trying to use perl5.14 on FreeBSD with port-based p5- modules.

      # uname -impr
      9.0-RELEASE amd64 amd64 ALTQ

    Deleted all installed ports to start with a clean system:

      # pkg_delete -a
      # rm -rf /var/db/pkg /var/db/ports /usr/local

    Installed portmaster and checked /etc/make.conf (it contains only WITHOUT_X11=YES). Then installed perl:

      # portmaster -g --force-config lang/perl5.14
      # perl -v
      This is perl 5, version 14, subversion 2 (v5.14.2) built for amd64-freebsd-multi

    Now the perl modules from ports:

      # portmaster -g devel/p5-Moose   # install Moose and its deps

    Checking with pkg_info gives a zillion errors like:

      # pkg_info
      pkg_info: corrupted record (pkgdep line without argument), ignoring dpendecy

    Checking with portmaster shows dependencies on perl5.12:

      # portmaster --check-depends
      Checking p5-Class-C3-0.24
        ===>>> lang/perl5.12 is listed as a dependency
        ===>>> but there is no installed version
        ===>>> Delete this dependency data? y/n [n]

    Running # perl-after-upgrade -f gives: Fixed 0 packages (0 files moved, 0 files modified). In short: Moose got installed into /usr/local/lib/perl5/site_perl/5.14.2/ but all of its dependencies went into /usr/local/lib/perl5/site_perl/5.12.4/. Yes, it is possible to fix this with # portmaster p5- , which reinstalls every installed p5- package once again, now correctly for 5.14, but installing them all twice is terrible... Questions: What is the correct way to install p5- modules from ports with perl5.14 installed on a clean system? How can the wrong dependency data pointing at perl5.12 be fixed without having to reinstall everything again? What am I doing wrong? PS: I know about perlbrew and/or local::lib, but in this case I want the port versions.
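
    A hedged sketch of the usual way to make the ports framework register p5- ports against the non-default perl: the PERL_VERSION knob is what ports trees of that era honour (newer trees use DEFAULT_VERSIONS instead, so check /usr/ports/Mk/bsd.perl.mk or UPDATING first), and it has to be in place before any p5- port is built.

      # tell the ports framework which perl every p5- port should depend on
      echo 'PERL_VERSION=5.14.2' >> /etc/make.conf
      # then rebuild the p5- ports so their recorded dependencies point at perl5.14
      portmaster p5-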

    Read the article

  • Apache Config: RSA server certificate CommonName (CN) ... NOT match server name?

    - by mmattax
    I'm getting this in error_log when I start Apache:

      [Tue Mar 09 14:57:02 2010] [notice] mod_python: Creating 4 session mutexes based on 300 max processes and 0 max threads.
      [Tue Mar 09 14:57:02 2010] [warn] RSA server certificate CommonName (CN) `*.foo.com' does NOT match server name!?
      [Tue Mar 09 14:57:02 2010] [warn] RSA server certificate CommonName (CN) `www.bar.com' does NOT match server name!?
      [Tue Mar 09 14:57:02 2010] [notice] Apache configured -- resuming normal operations

    Child processes then seem to seg fault:

      [Tue Mar 09 14:57:32 2010] [notice] child pid 3425 exit signal Segmentation fault (11)
      [Tue Mar 09 14:57:35 2010] [notice] child pid 3433 exit signal Segmentation fault (11)
      [Tue Mar 09 14:57:36 2010] [notice] child pid 3437 exit signal Segmentation fault (11)

    Server is RHEL. What's going on and what do I need to do to fix this? EDIT: As requested, the dump from httpd -M:

      Loaded Modules: core_module (static) mpm_prefork_module (static) http_module (static) so_module (static) auth_basic_module (shared) auth_digest_module (shared) authn_file_module (shared) authn_alias_module (shared) authn_anon_module (shared) authn_default_module (shared) authz_host_module (shared) authz_user_module (shared) authz_owner_module (shared) authz_groupfile_module (shared) authz_default_module (shared) include_module (shared) log_config_module (shared) logio_module (shared) env_module (shared) ext_filter_module (shared) mime_magic_module (shared) expires_module (shared) deflate_module (shared) headers_module (shared) usertrack_module (shared) setenvif_module (shared) mime_module (shared) status_module (shared) autoindex_module (shared) info_module (shared) vhost_alias_module (shared) negotiation_module (shared) dir_module (shared) actions_module (shared) speling_module (shared) userdir_module (shared) alias_module (shared) rewrite_module (shared) cache_module (shared) disk_cache_module (shared) file_cache_module (shared) mem_cache_module (shared) cgi_module (shared) perl_module (shared) php5_module (shared) python_module (shared) ssl_module (shared)
      Syntax OK
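
    The CN warning itself is normally just the SSL virtual host's ServerName not matching what the certificate covers; a minimal sketch of that part (hostnames and paths are examples, not taken from the question). The segfaults are more likely a separate problem: with perl, php5 and python all loaded, module conflicts are a common culprit, so the warning may be a red herring for those.

      <VirtualHost *:443>
          # must be a name the certificate actually covers, e.g. something under *.foo.com
          ServerName www.foo.com
          SSLEngine on
          SSLCertificateFile    /etc/pki/tls/certs/wildcard.foo.com.crt
          SSLCertificateKeyFile /etc/pki/tls/private/wildcard.foo.com.key
      </VirtualHost>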

    Read the article

  • Can't include Javascript variable in PHP mysql_query call? [on hold]

    - by user198895
    I want the PHP mysql_query call to retrieve user values based on the Agency drop-down value but I can't get this to work. Am I unable to include the Javascript variable agency.value in PHP?

      <script type="text/javascript">
      var agency = document.getElementById("agency");
      var user = document.getElementById("user");
      agency.onchange = onchange; // change options when agency is changed

      function onchange() {
          <?php include 'dbConnect.php'; ?>
          <?php $q = mysql_query("select id as UserID, CONCAT(LastName, ', ' , FirstName) as UserName from users where Agency = " . ?>agency.value<?php . " order by UserName");?>
          option_html = "<option value=0 selected>- All Users -</option>";
          <?php while ($row1 = mysql_fetch_array($q)) {?>
          if (agency.value == 0 || agency.value == '<?php echo $row1[AgencyID]; ?>') {
              option_html += "<option value=<?php echo $row1[UserID]; ?>><?php echo $row1[UserName]; ?></option>";
          }
          <?php } ?>
          user.innerHTML = option_html;
      }
      </script>
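
    It can't work as written: PHP runs on the server before the page ever reaches the browser, so agency.value does not exist yet when mysql_query() is called. A sketch of the usual workaround (column names are copied from the question, everything else is illustrative): emit the whole user list once as JSON and filter it in JavaScript, or fetch it with an AJAX call when the dropdown changes.

      <?php
      // build the full user list on the server (dbConnect.php comes from the question)
      include 'dbConnect.php';
      $users = array();
      $q = mysql_query("SELECT id AS UserID, Agency AS AgencyID,
                               CONCAT(LastName, ', ', FirstName) AS UserName
                        FROM users ORDER BY UserName");
      while ($row = mysql_fetch_assoc($q)) { $users[] = $row; }
      ?>
      <script type="text/javascript">
      var allUsers = <?php echo json_encode($users); ?>;  // now available client-side
      function onchange() {
          var html = "<option value=0 selected>- All Users -</option>";
          for (var i = 0; i < allUsers.length; i++) {
              if (agency.value == 0 || agency.value == allUsers[i].AgencyID) {
                  html += "<option value=" + allUsers[i].UserID + ">" + allUsers[i].UserName + "</option>";
              }
          }
          user.innerHTML = html;
      }
      </script>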

    Read the article

  • How expensive to run PC 24/7 or how to figure out how to determine it?

    - by jasondavis
    I realize this question is difficult to answer, as it depends on the user's location, what the PC is doing and what hardware it consists of, along with other factors, but I am hoping someone could give me a very rough estimate. I have always run many PCs in my home 24/7 and I am just now looking at it from an electricity-cost point of view.
    1) I live in Central Florida. Can anyone guesstimate the average monthly or daily cost of running an average PC? Intel quad-core processor, 1 SSD for the OS and programs, 4-5 1-2 TB hard drives in a RAID setup for data, and a 750 W PSU. What would your guess be?
    2) Is there an accurate way to figure this out (nothing super technical or confusing to a non-math person, please)? I have also seen those Kill-A-Watt devices; do they figure this kind of thing out for you?
    3) Does a larger PSU make your PC consume more power?
    Thanks for any help, you can most likely tell I am somewhat lost about this!
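
    A rough worked example rather than a measurement: the 250 W average draw and the roughly $0.12/kWh rate below are assumptions (a Kill-A-Watt gives you the real wattage to plug in), and no, a 750 W PSU only sets the ceiling; the machine draws what its load actually requires.

      # kWh per month = average kW x hours x days; cost = kWh x rate
      awk 'BEGIN { watts = 250; rate = 0.12
                   kwh = watts / 1000 * 24 * 30
                   printf "%.0f kWh/month  ~  $%.2f/month per machine\n", kwh, kwh * rate }'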

    Read the article

  • Cisco IOS policy route for router originated VPN traffic

    - by Paul
    We have a Cisco IOS router with two DSL connections. One of them is intended for general traffic (ADSL), the other for VPN links (BDSL) and various other traffic. So the default route is the ADSL link, and we have a combination of static routes for the VPN traffic and policy routes for other traffic types that should go out the BDSL link. For site-to-site traffic this is fine; we just static-route the public IPs and remote networks out of the BDSL line. The policy-based routing works fine for any internal traffic that matches an ACL. The problem now is that there are remote VPN sites originating from dynamic addresses, so we cannot use static routes. The replies to incoming ISAKMP requests follow the default route out of the ADSL (despite there being no crypto map on that interface). I want to route the outgoing VPN traffic out of the BDSL. I have tried adding udp/500 and ESP to and from the route-map ACL that pushes traffic out of the BDSL line, but it doesn't match, presumably because the route-map is evaluated earlier than the IPsec processing. Any ideas how I can do this? IOS version: 12.4.13T.
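
    A hedged sketch of the piece that usually solves this: an interface route-map never sees router-originated packets, but a local policy does (the ACL number, next hop and route-map name are examples, not from your config):

      ! classify locally generated IKE and ESP traffic
      access-list 150 permit udp any any eq isakmp
      access-list 150 permit esp any any
      !
      route-map VPN-OUT permit 10
       match ip address 150
       set ip next-hop 198.51.100.1   ! BDSL provider next hop (example address)
      !
      ! apply policy routing to traffic the router itself generates
      ip local policy route-map VPN-OUT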

    Read the article

  • High latency issue for web service call from amazon aws ec2 to local server

    - by SibzTer
    We have a legacy web application running in our on-premises data center in Houston. We have developed a new .NET 4-based web application in order to provide new features to customers. The new web application is hosted in the Amazon AWS EC2 environment (N. Virginia region, us-east-1b zone). In order to integrate seamlessly with the legacy application, the new web application makes web service calls to retrieve data. We are seeing unusually high latency, on the order of 5+ seconds, for these web service calls. The exact same web service call returns in less than a second from our local PCs (which makes sense given their physical proximity to the actual server). The weird part is that we have developers in California who also see millisecond response times. We are testing the web service response using third-party tools such as SoapUI and Google Chrome extensions such as Advanced REST Client and Postman REST Client. As if this wasn't weird enough, we have noticed the same low latency from certain other EC2 instances in the same region and availability zone while testing. If we experienced the high latency consistently from all the EC2 instances I could understand it, but there is something else going on. Comparing the various stats and results between the low-latency and high-latency EC2 servers does not show any significant differences: ping (a constant 40 ms), tracert, WinMTR, etc. We have instances in the VPC as well, so I tried both the public and private IP addresses of the web service host server, and that didn't make a difference to the above results either. We need to resolve this latency issue, as it is causing the resulting web pages to load very slowly (almost 15+ seconds, which is simply unacceptable). The EC2 instances run Windows Server Datacenter 64-bit. Let me know if there is any other info I can provide to help diagnose this.
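
    A minimal timing breakdown, run from both a "fast" and a "slow" instance against the same endpoint (the URL is a placeholder for the legacy service; curl is available on Windows via Git Bash or Cygwin, or use any comparable tool), shows whether the 5 seconds is spent on DNS, on connecting, or waiting for the first byte of the response:

      curl -o /dev/null -s -w 'dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
           "http://legacy.example.com/Service.asmx?op=GetData"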

    Read the article

  • My Linux desktop sees my HDMI-connected monitor, but my monitor says "No signal"

    - by hrunting
    I have a Gigabyte H55M-UD2H motherboard and an Acer S271HL monitor. When I connect the monitor to the motherboard via VGA, signal works perfectly. When I connect the monitor via HDMI, the system "sees" the connection, but the monitor receives no signal (the monitor shows a blue box which reads "No Signal" and then the monitor goes into power-saving state). Some fun facts about this: if I hook a different monitor to this box via HDMI, the monitor receives the output without issue (same computer/motherboard, same cable, different monitor) if I connect a different computer to the monitor via HDMI, the monitor receives the output without issue (different computer, same cable, same monitor) no signal is received whether in the OS or in the BIOS there are no BIOS options for controlling video output other than for selection of onboard vs. PCI/PCI-E-based video card (the system has no dedicated video card installed) The box is running Linux, so I have the output of xrandr which shows the connection and the monitor modes detected via DDC: ~$ xrandr --prop Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 8192 x 8192 VGA1 disconnected (normal left inverted right x axis y axis) HDMI1 disconnected (normal left inverted right x axis y axis) Broadcast RGB: Full supported: Full Limited 16:2 audio: auto supported: force-dvi off auto on DP1 disconnected (normal left inverted right x axis y axis) Broadcast RGB: Full supported: Full Limited 16:2 audio: auto supported: force-dvi off auto on HDMI2 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 598mm x 336mm EDID: 00ffffffffffff000472ca028d128022 1c160103803c2278ca7b45a4554aa227 0b5054bfef80714f8140818081c08100 9500b300d1c0023a801871382d40582c 450056502100001e000000fd00384c1f 5311000a202020202020000000fc0053 323731484c0a202020202020000000ff 004c55573044303130383531300a01e5 020324f14f0102030405060790111213 1415161f230907078301000067030c00 1000382d023a801871382d40582c4500 56502100001f011d8018711c1620582c 250056502100009f011d007251d01e20 6e28550056502100001e8c0ad08a20e0 2d10103e960056502100001800000000 000000000000000000000000000000de Broadcast RGB: Full supported: Full Limited 16:2 audio: auto supported: force-dvi off auto on 1920x1080 60.0*+ 50.0 25.0 30.0 1680x1050 59.9 1680x945 60.0 1400x1050 74.9 59.9 1600x900 60.0 1280x1024 75.0 60.0 1440x900 75.0 59.9 1280x960 60.0 1366x768 60.0 1360x768 60.0 1280x800 74.9 59.9 1152x864 75.0 1280x768 74.9 60.0 1280x720 50.0 60.0 1440x576 25.0 1024x768 75.1 70.1 60.0 1440x480 30.0 1024x576 60.0 832x624 74.6 800x600 72.2 75.0 60.3 56.2 720x576 50.0 848x480 60.0 720x480 59.9 640x480 72.8 75.0 66.7 60.0 59.9 720x400 70.1 HDMI3 disconnected (normal left inverted right x axis y axis) Broadcast RGB: Full supported: Full Limited 16:2 audio: auto supported: force-dvi off auto on DP2 disconnected (normal left inverted right x axis y axis) Broadcast RGB: Full supported: Full Limited 16:2 audio: auto supported: force-dvi off auto on DP3 disconnected (normal left inverted right x axis y axis) Broadcast RGB: Full supported: Full Limited 16:2 audio: auto supported: force-dvi off auto on How do I get this monitor to recognize the output from this HDMI socket?
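
    Since it also fails in the BIOS, this may well be a hardware-level handshake issue between that board and that panel, but from the desktop side a couple of things are cheap to try (the output name and the Broadcast RGB property come from the xrandr dump above; "Limited 16:235" is the usual full value string for that property on the Intel driver, so double-check it on your version):

      # force the link to retrain
      xrandr --output HDMI2 --off
      xrandr --output HDMI2 --auto
      # some panels only show a picture when the source sends limited-range RGB
      xrandr --output HDMI2 --set "Broadcast RGB" "Limited 16:235"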

    Read the article

  • Varnish configuration, NamevirtualHosts, and IP Forwarding

    - by Brent
    I currently have a bunch of NameVirtualHost-based websites, load balanced between 3 apache2 servers using ldirectord. I would like to insert Varnish as a reverse web proxy between ldirectord and Apache in the following way:
    - a request comes in to ldirectord
    - it is then load balanced between the 3 apache2 servers and Varnish, with a weight of 1 for the webservers and 99 for Varnish (so if Varnish is rebooted, the webservers take over seamlessly)
    - Varnish then load balances its requests between my apache2 servers.
    However, the Varnish part is not working. I wonder whether this has to do with the fact that my Apache servers use x.x.x.x:80 for their NameVirtualHosts instead of *:80 (they have to, since each server hosts multiple IP addresses)? Or perhaps it has to do with IP forwarding needing to be set up on the Varnish server (I did echo 1 > /proc/sys/net/ipv4/ip_forward on this server, is that sufficient?). How can I debug this problem? ldirectord doesn't log what it does with each request (and if it did, I would be overwhelmed with information, since I'm serving hundreds of requests per second), and the Varnish log shows the ldirectord server connecting to it every 5 seconds, but nothing else. I have set up a test site using this configuration, but it fails - no Apache access logs, no applicable Varnish logs.
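
    Two hedged observations plus a sketch. Varnish is an ordinary userspace proxy, so ip_forward is not what makes or breaks it; however, if ldirectord is balancing with LVS-DR, the Varnish box needs the service VIP on a loopback alias with ARP suppressed, exactly like the Apache real servers, or it will silently drop the forwarded packets (the every-5-seconds connection you see is probably just the ldirectord health check). Also note that Varnish connects to one backend address per backend definition, so if your NameVirtualHost sections are bound to several different IPs per Apache box, requests arriving via Varnish will only match the vhosts declared for the address Varnish connects to. On the Varnish side, something like this is enough (VCL 2.x syntax, addresses are examples); the Host header is passed through untouched:

      backend web1 { .host = "192.168.0.11"; .port = "80"; }
      backend web2 { .host = "192.168.0.12"; .port = "80"; }
      backend web3 { .host = "192.168.0.13"; .port = "80"; }

      director apaches round-robin {
          { .backend = web1; }
          { .backend = web2; }
          { .backend = web3; }
      }

      sub vcl_recv {
          set req.backend = apaches;
      }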

    Read the article

  • Backup with Mercurial and Robocopy?

    - by Andrew Neely
    The Problem: We would like to back up our critical files from several network shares to a removable hard drive. We want to automate the backup so we don't have to remember to run it, and it needs to finish overnight. Furthermore, we want to preserve multiple versions of each file so we can back out of our users' mistakes more easily.
    Background Information: I work in a large Windows-based enterprise with a centralized IT section which is responsible for all backups. Their backups are geared towards disaster recovery rather than user error, and require upper-level management approval for any non-disaster recoveries. Several times our backups have failed and we weren't notified. I do not have administrative rights to the server or my desktop. We are trying to back up some 198,000 files spanning about 240 gigabytes. These files rarely change. Our backup drive is one terabyte.
    My Proposed Solution: What I would like to do is write a batch file using Robocopy with the /mir option, along with Mercurial SCM to store all versions of the files. I would do an hg add followed by an hg commit before each execution of Robocopy to save the current state, and then make a mirrored copy of the file structures. The problem is that /mir will delete every folder not present in the source, and Mercurial stores its repository in a .hg folder in the destination folder. Does anybody know how I could either convince Mercurial to store the .hg folder elsewhere, or convince Robocopy not to delete it from the destination? I'm trying to avoid writing a custom program to do the copying.
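
    Robocopy can be told to leave the repository alone: directories excluded with /XD are neither compared nor deleted by /MIR. A sketch of the nightly run under that assumption (paths are examples); hg addremove picks up both new and deleted files before the commit.

      rem mirror the shares but never touch the .hg directory in the destination
      robocopy "\\fileserver\critical" "E:\backup\critical" /MIR /XD .hg /R:2 /W:5 /LOG:"E:\backup\robocopy.log"
      rem record the new state of the mirror as a versioned snapshot
      cd /d E:\backup\critical
      hg addremove --quiet
      hg commit -m "nightly snapshot %DATE%"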

    Read the article

  • Setup.exe called from a batch file crashes with error 0x0000006

    - by Alex
    We're going to be installing some new software on pretty much all of our computers and I'm trying to set up a GPO to do it. We're running a Windows Server 2008 R2 domain controller and all of our machines are Windows 7. The GPO calls the following script, which sits on a network share on our file server. The script itself calls an executable that sits on another network share on another server. The executable immediately crashes with an error 0x0000006. The event log just says this: "Windows cannot access the file for one of the following reasons: there is a problem with the network connection, the disk that the file is stored on, or the storage drivers installed on this computer; or the disk is missing. Windows closed the program Setup.exe because of this error." Here's the script (which is stored on \\WIN2K8R2-F-01\Remote Applications):

      @ECHO OFF
      IF DEFINED ProgramFiles(x86) (
          ECHO DEBUG: 64-bit platform
          SET _path="C:\Program Files (x86)\Canam"
      ) ELSE (
          ECHO DEBUG: 32-bit platform
          SET _path="C:\Program Files\Canam"
      )
      IF NOT EXIST %_path% (
          ECHO DEBUG: Folder does not exist
          PUSHD \\WIN2K8R2-PSA-01\PSA Data\Client
          START "" "Setup.exe" "/q"
          POPD
      ) ELSE (
          ECHO DEBUG: Folder exists
      )

    Running the script manually as administrator results in the same error. Setting up a shortcut with the same target and parameters works perfectly, and manually calling the executable also works. Not sure if it matters, but the installer is based on dotNETInstaller; I don't know which version. I'd appreciate any suggestions on fixing this. Thanks in advance! UPDATE: I highly doubt this matters, but the network share the script is hosted on is a shared drive, while the network share the script references for the executable is a shared folder. Also, both shares have Domain Computers listed with full access on both the sharing and security tabs. And PUSHD works without wrapping the path in quotes.
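
    If the 0x0000006 in the event is really NTSTATUS 0xC0000006 (STATUS_IN_PAGE_ERROR), the usual cause is the EXE being demand-paged over a share that drops or re-authenticates mid-run, which would fit it working interactively but not at GPO time. A hedged variation worth trying is to copy the installer locally first so nothing is paged over the network (the source path reuses the one from the script, the rest is illustrative):

      rem stage the installer on the local disk, then run it from there
      xcopy "\\WIN2K8R2-PSA-01\PSA Data\Client" "%TEMP%\CanamSetup\" /E /I /Y /Q
      "%TEMP%\CanamSetup\Setup.exe" /q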

    Read the article

  • Menu command stuck on screen

    - by 280Z28
    For some reason, periodically when I select a menu command, the command label gets "stuck" on the screen and won't go away. I can close all open applications, including whichever one I was using when it got stuck, but it still won't go away. In the screenshot below, I opened an new instance of IE just to show how the label stays on top. The label was not created by this instance of IE. Edit with the source: The label that gets stuck is the first menu command I select in IE. If a label is already stuck, a new one does not get stuck (regardless of which instance(s) of IE are involved). Based on this knowledge, I now just open IE on my secondary monitor, carefully open the context menu so the Properties command is in the bottom corner, and click it. This is not a solution... The label never moves and is transparent to mouse input (if I click it, it's as if I clicked the item behind it). The label does not go away if I close all running applications. I haven't tried stopping services or closing system tray items like Live Mesh. The label does go away if I change the screen resolution and then change it back. Any ideas how I can stop this from happening? It's happened a half dozen times since yesterday and it's becoming quite disrupting to my work. Obviously I added the circle in MS Paint. That part isn't stuck. ;)

    Read the article

  • I need an admin toolset for Windows 2003 and 2008

    - by eugeneK
    I know this is a very general question, but anyway: I need a few tools. I'll write down my tasks as a sysadmin, and if you know of anything to automate them I'd be glad to hear it. I don't mind paying for software unless it is far too expensive. First, I back up all files on the server to local/office storage. I 7-Zip all SQL backup files, move them over the network to a centralized location, and then FTP them from an office PC which has no FTP server installed and cannot have one. Backups happen at 4 AM, so I need to schedule the compression and the FTP transfer that follows. I also FTP all IIS web applications as a differential backup, and the same goes for the VOD movies. The second tool I need is a system monitor that watches all servers, both from the servers themselves and from an external location, for CPU/memory/hard disk problems and other basic failures. This tool should be able to hit a website URL with parameters and send me an email with a full report on failure. The third tool I need is a way to collect the Event Logs from 10 Windows-based servers without accessing each of them manually. If you know of any solutions, thanks in advance.
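
    The first task is scriptable with built-in tools plus 7-Zip while you evaluate full backup suites; a rough sketch to be scheduled via Task Scheduler at 04:00 (paths, hostnames and the command file are placeholders):

      rem compress last night's SQL backups into a dated archive on the central share
      "C:\Program Files\7-Zip\7z.exe" a -t7z "\\central-store\backups\sql-%DATE:/=-%.7z" "D:\SQLBackups\*.bak"
      rem push it onward with the stock command-line ftp client, driven by a
      rem pre-written command file containing the open/user/binary/put/bye lines
      ftp -n -s:"C:\scripts\upload-sql-backup.ftp"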

    Read the article

  • Extending home network using multiple access point originating from another access point

    - by cyberjar09
    I have a home network with the current setup (RJ45 running from the main router to the access point):

      ISP --> modem (192.168.1.1) --> router (192.168.2.1) --> access point 1 (192.168.2.2)

    However, I would like to extend the network to two more floors in the house via the existing access point (the router is too far away and not reachable with a network cable, hence I need to extend from the current access point). Please see the diagram below:

      ISP --> modem (192.168.1.1) --> router (192.168.2.1) --> access point 1 (192.168.2.2) --> access point 2 (192.168.2.3)
                                                                                           \--> access point 3 (192.168.2.4)

    Q1: is this setup possible?
    Q2: if possible, will I have to do anything different from what I did to set up access point 1?
    edit 1: I am trying to study the dd-wrt "Linking Routers" documentation to see which would be the correct mode of operation for me, but I'm confused because I don't see any info on how to use an existing access point to extend the signal of the SSID. If anyone could point me to the correct wiki for how to set up AP2 and AP3 based on AP1, it would be very helpful. For AP1, I did the following:
    - used a static IP and set the same SSID as the primary wireless router
    - used the same security settings as the primary wireless router
    - pointed AP1 at 192.168.2.1 (the primary router) for DHCP
    Thanks in advance.

    Read the article

  • Will adding a SSD cache device to my ZFS storage improve performance?

    - by Sysadminicus
    The server has 4 GB of RAM and my zpool is made up of 15.5k SAS drives arranged like this:

      NAME         STATE     READ WRITE CKSUM
      tank         ONLINE       0     0     0
        raidz1-0   ONLINE       0     0     0
          c0t2d0   ONLINE       0     0     0
          c0t3d0   ONLINE       0     0     0
          c0t4d0   ONLINE       0     0     0
          c0t5d0   ONLINE       0     0     0
          c0t6d0   ONLINE       0     0     0
          c0t7d0   ONLINE       0     0     0
          c0t8d0   ONLINE       0     0     0
        raidz1-1   ONLINE       0     0     0
          c0t10d0  ONLINE       0     0     0
          c0t11d0  ONLINE       0     0     0
          c0t12d0  ONLINE       0     0     0
          c0t13d0  ONLINE       0     0     0
          c0t14d0  ONLINE       0     0     0
      spares
        c0t9d0     AVAIL
        c0t1d0     AVAIL

    The primary use is as an NFS store for a couple of VMware ESXi servers. I can't do any "true" benchmarks because this is a production system (no budget for test systems), but using dd and bonnie++ I can't get more than ~40-50 MB/s writes and ~70-90 MB/s reads. It seems I should be able to do much better, but I'm not sure where to optimize. Based on what I've read, I think dropping in an OCZ Vertex 2 Pro SSD as my L2ARC is going to be the best bang for the buck to improve throughput. Is there something else I should be looking into to help performance? If not: how do I know how big a cache device I need, and am I safe with only a single SSD as my cache device?
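
    For reference, attaching and later removing a cache device is cheap, so it is a safe experiment (the device name is an example), and a single SSD is fine for L2ARC because losing it only costs cached reads. The caveats on a 4 GB box: L2ARC headers live in the ARC itself, so more RAM may help at least as much as the SSD, and for ESXi-over-NFS workloads a separate log device (SLOG) is often the bigger win, since NFS sync writes tend to be the write-throughput bottleneck.

      # add (and, if it doesn't help, remove) an L2ARC device
      zpool add tank cache c1t0d0
      zpool iostat -v tank 5        # watch the cache device fill and the I/O pattern shift
      zpool remove tank c1t0d0      # cache devices can be removed safely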

    Read the article

  • Routing a single request through multiple nginx backend apps

    - by Jonathan Oliver
    I wanted to get an idea if anything like the following scenario was possible: Nginx handles a request and routes it to some kind of authentication application where cookies and/or other kinds of security identifiers are interpreted and verified. The app perhaps makes a few additions to the request (appending authenticated headers). Failing authentication returns an HTTP 401. Nginx then takes the request and routes it through an authorization application which determines, based upon identity and the HTTP verb (put, delete, get, etc.) and URL in question, whether the actor/agent/user has permission to performed the intended action. Perhaps the authorization application modifies the request somewhat by appending another header, for example. Failing authorization returns 403. (Wash, rinse, repeat the proxy pattern for any number of services that want to participate in the request in some fashion.) Finally, Nginx routes the request into the actual application code where the request is inspected and the requested operations are executed according to the URL in question and where the identity of the user can be captured and understood by the application by looking at the altered HTTP request. Ideally, Nginx could do this natively or with a plugin. Any ideas? The alternative that I've considered is having Nginx hand off the initial request to the authentication application and then have this application proxy the request back through to Nginx (whether on the same box or another box). I know there are a number of applications frameworks (Django, RoR, etc.) that can do a lot of this stuff "in process", but I was trying to make things a little more generic and self contained where different applications could "hook" the HTTP pipeline of Nginx and then participate in, short circuit, and even modify the request accordingly. If Nginx can't do this, is anyone aware of other web servers that will perform in the manner described above?
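
    The closest stock mechanism is nginx's auth_request module (compiled in with --with-http_auth_request_module on recent releases): each protected location can run one authorizing subrequest whose 2xx response lets the request through, whose 401/403 short-circuits it, and whose response headers can be copied onto the proxied request. Chaining several independent services per request isn't supported directly, so either fold authentication and authorization into one endpoint or fall back to the proxy-back-through-nginx approach described above. A minimal sketch (upstream names, ports and header names are invented for illustration):

      upstream auth_app    { server 127.0.0.1:9001; }
      upstream app_backend { server 127.0.0.1:9000; }

      server {
          listen 80;

          location = /_auth {
              internal;
              proxy_pass http://auth_app;
              proxy_pass_request_body off;            # the auth app only needs headers
              proxy_set_header Content-Length "";
              proxy_set_header X-Original-URI $request_uri;
              proxy_set_header X-Original-Method $request_method;
          }

          location / {
              auth_request /_auth;                     # 401/403 here stops the request
              auth_request_set $auth_user $upstream_http_x_authenticated_user;
              proxy_set_header X-Authenticated-User $auth_user;
              proxy_pass http://app_backend;
          }
      }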

    Read the article

  • VNC server failed to start CentOS

    - by Shaun
    I followed a tutorial on how to install and get VNCserver to run on CentOS 6 (since freenx isnt supported yet) and I keep getting Starting VNC server: 1:user [FAILED] How do I figure out whats going on here? Im new to Linux/CentOS and im trying to get RDP going so I can step away from SSH as much as possible (you know us Windows users love our pretty GUI's). So, where is the error log at and how do I find it? Or maybe someone else has experienced this and knows the solution based on the simple error given? After running in debug mode, here is my error + . /etc/init.d/functions ++ TEXTDOMAIN=initscripts ++ umask 022 ++ PATH=/sbin:/usr/sbin:/bin:/usr/bin ++ export PATH ++ '[' -z '' ']' ++ COLUMNS=80 ++ '[' -z '' ']' +++ /sbin/consoletype ++ CONSOLETYPE=pty ++ '[' -f /etc/sysconfig/i18n -a -z '' -a -z '' ']' ++ . /etc/profile.d/lang.sh ++ unset LANGSH_SOURCED ++ '[' -z '' ']' ++ '[' -f /etc/sysconfig/init ']' ++ . /etc/sysconfig/init +++ BOOTUP=color +++ RES_COL=60 +++ MOVE_TO_COL='echo -en \033[60G' +++ SETCOLOR_SUCCESS='echo -en \033[0;32m' +++ SETCOLOR_FAILURE='echo -en \033[0;31m' +++ SETCOLOR_WARNING='echo -en \033[0;33m' +++ SETCOLOR_NORMAL='echo -en \033[0;39m' +++ PROMPT=yes +++ AUTOSWAP=no +++ ACTIVE_CONSOLES='/dev/tty[1-6]' +++ SINGLE=/sbin/sushell ++ '[' pty = serial ']' ++ __sed_discard_ignored_files='/\(~\|\.bak\|\.orig\|\.rpmnew\|\.rpmorig\|\.rpmsave\)$/d' + '[' -r /etc/sysconfig/vncservers ']' + . /etc/sysconfig/vncservers ++ VNCSERVERS='1:larry 2:moe 3:curly' ++ VNCSERVERARGS[1]='-geometry 800x600' ++ VNCSERVERARGS[2]='-geometry 640x480' ++ VNCSERVERARGS[3]='-geometry 640x480' + prog='VNC server' + RETVAL=0 + case "$1" in + start + '[' 0 '!=' 0 ']' + . /etc/sysconfig/network ++ NETWORKING=yes ++ HOSTNAME=vps.binaryvisionaries.com ++ DOMAINNAME=server.name ++ GATEWAYDEV=venet0 ++ NETWORKING_IPV6=yes ++ IPV6_DEFAULTDEV=venet0 + '[' yes = no ']' + '[' -x /usr/bin/vncserver ']' + '[' -x /usr/bin/Xvnc ']' + echo -n 'Starting VNC server: ' Starting VNC server: + RETVAL=0 + '[' '!' -d /tmp/.X11-unix ']' + for display in '${VNCSERVERS}' + SERVS=1 + echo -n '1:larry ' 1:larry + DISP=1 + USER=larry + VNCUSERARGS='-geometry 800x600' + runuser -l larry -c 'cd ~larry && [ -r .vnc/passwd ] && vncserver :1 -geometry 800x600' + RETVAL=1 + '[' 1 -eq 0 ']' + break + '[' -z 1 ']' + '[' 1 -eq 0 ']' + failure 'vncserver start' + local rc=1 + '[' color '!=' verbose -a -z '' ']' + echo_failure + '[' color = color ']' + echo -en '\033[60G' + echo -n '[' [+ '[' color = color ']' + echo -en '\033[0;31m' + echo -n FAILED FAILED+ '[' color = color ']' + echo -en '\033[0;39m' + echo -n ']' ]+ echo -ne '\r' + return 1 + '[' -x /usr/bin/plymouth ']' + /usr/bin/plymouth --details + return 1 + echo + '[' 1 -eq 98 ']' + return 1 + exit 1
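
    For what it's worth, the -x trace above bails out right at the [ -r .vnc/passwd ] test: runuser returns 1 because the user has no VNC password file yet, so vncserver is never actually started. A hedged first step is to create one for each user listed in VNCSERVERS and try again (also note the file lists larry/moe/curly while the failing entry is "1:user", so make sure /etc/sysconfig/vncservers matches accounts that really exist):

      # as root, set a VNC password for each configured user, then retry
      su - larry -c vncpasswd
      service vncserver start
      # if it still fails, the per-user log is in ~/.vnc/<hostname>:<display>.log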

    Read the article

  • Tool to modify properties/metadata of a PDF? i.e. Change "Title", "Author"? Sony Reader showing som

    - by Chris W. Rea
    I own a Sony Reader PRS-600 ebook reader. I bought a ton of Manning Publications ebooks (DRM-free) recently. Many of the books are PDFs since not all the ones I wanted are available in epub format. The problem: Some of the PDF books I purchased have incorrect or missing metadata. Making things worse, the Sony Reader only displays the "Title" from the PDF metadata when displaying book titles in the reader's collection of books! The Reader doesn't display the filename. So, even though I have a PDF informatively named "Windows PowerShell In Action.pdf", it shows up as "untitled" in the Reader. Imagine how useful the Reader's list of book titles becomes when many are just "untitled" or "unnamed document" ! Yes, it is maddening. So – short of expecting the publisher to fix the files or Sony to add a filename-based list instead, I'm looking for a way to fix the PDF metadata. I can view the metadata with Adobe Reader, but it doesn't permit modification of the properties. Leading to: Question: Is there a tool – free, or cheap – and either for PC or Mac, that can modify the properties / metadata of a DRM-free PDF document? I want to correct "Title" and "Author" fields, specifically.
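
    One cheap, cross-platform option is ExifTool, which can rewrite the PDF Info fields from the command line (the values below are examples; -overwrite_original skips the backup copy it would otherwise leave next to the file):

      exiftool -Title="Windows PowerShell in Action" -Author="Manning Publications" \
               -overwrite_original "Windows PowerShell In Action.pdf"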

    Read the article

  • Tracking Security Vulnerability remediation

    - by Zypher
    I've been looking into this for a little while, but haven't really found anything suitable. What I am looking for is a system to track security vulnerability remediation status - something like "Bugzilla for IT". It should be pretty simple and allow the following:
    - batch entry of new vulnerabilities that need to be remediated
    - per-user assignment
    - AD/LDAP authentication
    - a simple interface to track progress - research, change control status, remediated, etc.
    - historical search ability
    - ability to divide by division
    - ability to store proof of resolution for the Security Team to access
    - dependency tracking
    - Linux-based is best (that's my group :) )
    - free is good, but cost doesn't matter so much if the system is worth it
    The system doesn't have to have all of these features, but if it did that would be great. Yes, we could use our helpdesk software, but that has a bunch of pitfalls, such as triggering SLA alerts and penalties, and not being easily searchable outside of a group. Most of what I have found are bug tracking systems that are geared towards developers and are honestly way overkill for what I am looking for. Server Fault's input is greatly appreciated as always!

    Read the article
