Search Results

Search found 9916 results on 397 pages for 'counting sort'.


  • How do you do a keyword search in the Services.msc (MMC) window in Windows 7?

    - by Warren P
    When you want to run a service, you have very limited capabilities in all current Windows versions, as far as I can tell. I usually start Services by typing "services.msc" into the Start-Run box; on most versions of Windows, this works. I know how to click the "Name" column in the MMC view of Windows Services. If you know what the first few characters of a service name are, you can usually sort by the name and type the prefix to scroll the list down (to find Windows Search, for example). This seems pretty weak to me, so I spent some time searching the interwebs for tools that do a better job of managing services. Usually I have a keyword ("fooWare", say), and I need to find the (usually badly named) service and start it and stop it. This is often WAY too hard. The best I could do is "NET SERVICES" from the command line, and maybe add a grep in there, but that doesn't list every service, only a few of them. And the MMC snap-in in Win7 now has an Export List button that exports to a CSV text file, which I have used from time to time to export and then search. I have thought of writing my own tool. I'm hoping a better "service manager" utility exists out there that sysadmins use. I'd like a search box at the top right corner, much the same way that the Add-Remove-Programs dialog in Win7 and Vista has a search facility. Does such a services utility exist out there?
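
    For illustration only, a couple of one-liners of the sort being asked about; "fooWare" is just the placeholder keyword from the question, and this is a sketch rather than the dedicated utility being sought:

        # PowerShell (built into Windows 7): filter services by name or display name
        Get-Service | Where-Object { $_.DisplayName -match 'fooWare' -or $_.Name -match 'fooWare' }

        # the same idea with the built-in command-line tools (matches single output lines only)
        sc query state= all | findstr /i "fooWare"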

    Read the article

  • Possible to host CentOS netinstall files on a local HTTP/FTP?

    - by garlicman
    I'm running XenServer on a Dell R610 and am running into a catch-22. During install from DVD, CentOS can't find the DVD package catalogue. It's a reported error for some: XenServer + CentOS6 + DVD install in some hardware configurations = failed install. Yes, I checked the MD5 and let the disc test pass. In every reported case, the netinstall was the solution. The issue is that my net access is required to go through a web proxy that prompts before you can download a file. This naturally breaks any download automation. I've been waiting on our IT to put in an exception rule to allow my lab to bypass the prompt, but it's been over 3 weeks now and they don't seem responsive. (I've been working on this a day or two a week.) I want to try and host the netinstall files locally in my Xen network. Right now I only have a bunch of Windows based VMs; CentOS won't install, so I don't have any Linux tools. I had tried simply hosting all the DVD contents off one of the Windows servers using Mongoose. (I didn't want to set up IIS.) I copied them to a hosted sub-directory similar to all the mirrors out there (e.g. http:///centos/6.2/os/i386/) with no auth or anything. Then in the netinstall I correctly pointed to it. I now realize just copying the DVD files over won't work. The repodata will point to a local device, not the site I'm hosting. (e.g. the DVD repodata includes XML that points to where the packages are.) Clearly I'm hosting them over HTTP, not from a DVD. Is there an easy way to sort this out? I'm just trying to install CentOS6 on Xen. If there's a turnkey downloadable Xen image with CentOS 6.2 on it, or a downloadable repo image, I'll take that too! Thank you in advance!

    Read the article

  • How do I deliver mail for wildcard addresses to a particular user/alias/program?

    - by David M
    I need to configure sendmail so that mail delivered for wildcard addresses is accepted for delivery and then delivered to a user, alias, or directly to a script. I can rewrite the envelope/headers any number of ways, but I don't know how to accept the wildcard address when it's provided in RCPT TO:. Everything I've tried so far winds up with a 550 user unknown error. So here's a specific example: I want to be able to handle any address that consists of a series of digits followed by a dot followed by a word, then pipe that to a script. If the headers get rewritten, that's OK, but I need the envelope to contain the actual Delivered-To address. Here's the sort of SMTP session I need:

        220 blah.foo.com ESMTP server ready; Thu, 22 Apr 2010 20:41:08 -0700 (PDT)
        HELO blort.foo.com
        250 blah.foo.com Hello blort.foo.com [10.1.2.3], pleased to meet you
        MAIL FROM: <[email protected]>
        250 2.1.0 <[email protected]>... Sender ok
        RCPT TO: <[email protected]>
        250 2.1.5 <[email protected]>... Recipient ok

    I tried some stuff with regex maps, but I never got past 550 user unknown.
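
    For what it's worth, a sketch of one shape the accept-and-pipe plumbing can take in stock sendmail, assuming FEATURE(virtusertable) is enabled and foo.com is configured as a local/virtual domain. The alias name and handle-numbered.sh are hypothetical, and this catch-all accepts every address at the domain rather than matching the digits-dot-word pattern (that part would still need a regex map or custom ruleset), nor does it deal with preserving the original envelope recipient:

        # /etc/mail/virtusertable -- route every otherwise-unknown address at the domain
        # to a local alias
        @foo.com    numbered-handler

        # /etc/aliases -- pipe that alias into a script
        # (with smrsh, the script must live in or be linked from the smrsh directory)
        numbered-handler: "|/usr/local/bin/handle-numbered.sh"

        # rebuild the maps and the alias database
        makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable
        newaliases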

    Read the article

  • Website Use Monitoring for 3 People

    - by linkedlinked
    I work in an IT startup with 2 partners, and I'm the programmer/IT guy -- in other words, the workhorse. To make a long story short, I'm doing most of the work right now, while they spend all day on Facebook. That's OK, because they're paying my salary, but if the project fails, I'm sure they'll blame me for it (I'm doing my best to make sure that doesn't happen!), and I want some sort of recourse. I already have an app that blocks time-wasters on my local PC, and keeps logs of when the app is enabled (so I can say "I had Facebook blocked from 9am-5pm today."). Is there any way I can get a brief summary of the most heavily visited sites, split up by client PC? At the end of the month, I want to be able to say "You both load Facebook, on average, every 10 minutes. You spend hours a day on YouTube, and haven't opened up our bugtracker in weeks" and maybe have a nifty chart or graph to match it. We have a crappy D-Link router, and no IT budget. They are both on Windows Vista; I run Ubuntu Linux. I don't want to install any monitoring software on their PCs, but I'm totally fine with, say, routing all the network traffic through my machine. I guess I can think of lots of ways to accomplish this (telnet into JSSH and list open tabs? log all the DNS requests, per domain? even thinking of setting up a webcam on my desk and just keeping 5-minute snapshots...), I just don't really know where to start. Any advice is appreciated, thanks!
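
    Of the approaches mentioned above, the DNS-logging one is probably the easiest to sketch. Assuming the other PCs' traffic really were routed through the Ubuntu machine and dnsmasq acted as the LAN's resolver, something along these lines could give a rough per-client, per-site tally (the interface name, log path, and awk field positions are assumptions to adjust against the actual log format):

        # /etc/dnsmasq.conf -- answer DNS for the LAN and log every query with the client IP
        interface=eth0
        log-queries
        log-facility=/var/log/dnsmasq.log

        # rough "domain + client" counts at the end of the month
        grep 'query\[A\]' /var/log/dnsmasq.log | awk '{print $(NF-2), $NF}' | sort | uniq -c | sort -rn | head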

    Read the article

  • Courier IMAP always disconnects since update

    - by Raffael Luthiger
    Since one of our customers updated their server, Courier does not handle IMAP connections properly any more. POP3 works without any problems. When I try to test IMAP with telnet it always goes like this:

        $ telnet domain.com 143
        Trying 188.40.46.214...
        Connected to domain.com.
        Escape character is '^]'.
        * OK [CAPABILITY IMAP4rev1 UIDPLUS CHILDREN NAMESPACE THREAD=ORDEREDSUBJECT THREAD=REFERENCES SORT QUOTA IDLE ACL ACL2=UNION STARTTLS] Courier-IMAP ready. Copyright 1998-2011 Double Precision, Inc. See COPYING for distribution information.
        01 LOGIN [email protected] test
        Connection closed by foreign host.

    I enabled debugging in the authdaemond but the output does not really help much:

        Apr 12 23:10:04 servername authdaemond: received auth request, service=imap, authtype=login
        Apr 12 23:10:04 servername authdaemond: authmysql: trying this module
        Apr 12 23:10:04 servername authdaemond: SQL query: SELECT login, password, "", uid, gid, homedir, maildir, quota, "", concat('disableimap=',disableimap,',disablepop3=',disablepop3) FROM mail_user WHERE login = '[email protected]'
        Apr 12 23:10:04 servername authdaemond: password matches successfully
        Apr 12 23:10:04 servername authdaemond: authmysql: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/vmail, [email protected], fullname=<null>, maildir=/var/vmail/domain.com/test, quota=0, options=disableimap=n,disablepop3=n
        Apr 12 23:10:04 servername authdaemond: Authenticated: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/vmail, [email protected], fullname=<null>, maildir=/var/vmail/domain.com/test, quota=0, options=disableimap=n,disablepop3=n

    Right after the "Authenticated" line the output stops. There is no other message, and I could not find any related message in any other log file I checked. The system was updated from Ubuntu 10.10 to 12.04. How could I get more information? Or does anybody have an idea what could go wrong here?
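
    A couple of generic ways to get more information, offered only as suggestions (the paths come from the authdaemond output above; the process name and trace file location are assumptions):

        # check that the maildir authdaemond reports actually exists and is usable by uid/gid 5000
        ls -ld /var/vmail/domain.com/test /var/vmail/domain.com/test/cur /var/vmail/domain.com/test/new /var/vmail/domain.com/test/tmp

        # trace the IMAP server while reproducing the telnet login in another terminal;
        # a failing open()/chdir() right before the disconnect usually names the culprit
        strace -f -tt -o /tmp/courier-imap.trace -p "$(pgrep -f couriertcpd | head -n 1)"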

    Read the article

  • 0% CPU in top for all processes, but load average > 1

    - by chrisdew
    On two different servers (with Ubuntu 12.04 LTS AMD64) I have seen the following behaviour:

        top - 10:50:05 up 305 days, 21:17, 1 user, load average: 1.94, 2.52, 2.97
        Tasks: 141 total, 2 running, 139 sleeping, 0 stopped, 0 zombie
        Cpu(s): 41.5%us, 6.5%sy, 0.0%ni, 51.8%id, 0.0%wa, 0.2%hi, 0.1%si, 0.0%st
        Mem: 8178432k total, 5753740k used, 2424692k free, 159480k buffers
        Swap: 15625208k total, 0k used, 15625208k free, 4905292k cached

        PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
        1 root 20 0 23928 2072 1216 S 0 0.0 0:56.42 init
        2 root 20 0 0 0 0 S 0 0.0 0:00.01 kthreadd
        3 root RT 0 0 0 0 S 0 0.0 0:01.23 migration/0
        4 root 20 0 0 0 0 S 0 0.0 2:39.82 ksoftirqd/0
        5 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0
        6 root RT 0 0 0 0 S 0 0.0 0:02.99 migration/1
        7 root 20 0 0 0 0 S 0 0.0 2:32.15 ksoftirqd/1
        8 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/1
        9 root RT 0 0 0 0 S 0 0.0 0:11.67 migration/2
        10 root 20 0 0 0 0 S 0 0.0 29:00.34 ksoftirqd/2

    The server is working fine, but top shows all processes as using 0% CPU. A reboot fixed this on an earlier machine, but I haven't yet tried it on this one. I have tried top several times, and so am sure that I haven't accidentally pressed '<' or '>' to sort by a different column. Sorting the process list by all of the available columns still shows 0% CPU for all displayed processes. What is going on? Is this a kernel bug? Update: If I use top -p <PID> for a known, busy process, top still displays 0% CPU for that process.
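
    A few ways to cross-check top's numbers from the same box, offered only as a sketch (any busy PID can be substituted):

        # per-process CPU as computed by ps rather than top
        ps -eo pcpu,pid,user,comm --sort=-pcpu | head -n 15

        # sample actual CPU consumption over an interval (sysstat package)
        pidstat 1 5

        # raw counters that top derives its percentages from; if these tick up while top
        # says 0%, the kernel accounting is fine and the problem is in top itself
        grep '^cpu ' /proc/stat; sleep 1; grep '^cpu ' /proc/stat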

    Read the article

  • Domain Controller died, now get authentication boxes in IE for SDL Tridion 2009

    - by Rob Stevenson-Leggett
    We had a major network issue where our secondary domain controller (responsible for Win2k3 boxes) died and had to be rebuilt (I believe this is what happened; I am a developer, not a network admin). Anyway, I am working remotely via VPN at the moment, and since this happened I am getting an authentication box when trying to access certain areas of SDL Tridion via IE (Tridion 2009 SP1 is IE only). It seems like somewhere my credentials are not being passed correctly, or the ones cached on my laptop do not match the ones the Domain Controller has. This only seems to affect Windows 2003 servers. Our IT support thinks that the only way to sort it out is to connect my laptop directly to the network. I am not planning to go to the office for a few weeks at least, and this issue means I have to work with Tridion via Remote Desktop. We thought changing the password on my account might work, but this didn't help. So basically my question is: is there any way I can reset my credential cache without having to reconnect to the network? Or is it IE that is causing the problem perhaps, since I can RDP to servers and use Tridion 2011 instances in other browsers fine? I am on Windows 7 using the SonicWall VPN client.

    Read the article

  • Sending mail through local MTA while domain MX records point to Google Apps

    - by Assaf
    My domain's email is managed by Google Apps, so that domain users get Gmail and Calendar, etc. But I also want to be able to send applicative notifications to users outside the domain via email (e.g. "someone commented on your post", and so on). However, if I try to send email through code I get blocked by Gmail after a few emails. I send marketing email through MailChimp, to minimize the risk of appearing as spam to my users (one-click unsubscribe, etc.). But I can't send applicative messages in this way. I want to install a local MTA (my server runs Ubuntu), but I'm not sure what anti-spam measures I need to implement so that receiving MTAs don't think it's a spam server. What's stopping anyone from setting up a mail server and sending emails using my domain name? AFAIK it's the DNS records that show the MTA's address actually belongs to the domain. But my understanding of this is rather superficial, so someone please correct me if I'm wrong. But what sort of DNS configuration do I need to put in place so that I don't get blacklisted (assuming I don't actually spam anyone)? The MX records already point to Google, and I'd like to keep it this way. So do I just need to define an A record for my internal mail server? Should it show email as coming from a sub-domain, so as not to conflict with the bare domain being managed by Google? Edit: Does the following SPF record make sense if I want email from my domain name to be sent by either Google's servers or any server with a DNS name ending with mydomain.com? "v=spf1 ptr mx:google.com mx:googlemail.com ~all" How should I set up reverse DNS for my server? If I have an A record that points mailsender.mydomain.com to my MTA's IP address, does it mean that reverse lookup will only allow emails sent from [email protected]?
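
    For illustration only, one common shape for these records; mailsender.mydomain.com and the IP are placeholders, and the Google include shown is the one Google documents for Google Apps (worth verifying against current documentation):

        ; A record for the box that runs the local MTA
        mailsender.mydomain.com.   IN  A    203.0.113.10

        ; SPF: allow Google's servers plus the local MTA's A record to send for the domain
        mydomain.com.              IN  TXT  "v=spf1 include:_spf.google.com a:mailsender.mydomain.com ~all"

    Reverse DNS is separate from the A record: the PTR record for the MTA's IP is set by whoever owns the IP block, and it only needs to resolve back to a hostname the MTA announces; it does not by itself restrict which From addresses can be used.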

    Read the article

  • Is there any way to synchronize Outlook RSS Feeds with BlackBerry?

    - by nvuono
    Does anyone know how I can view the contents of my Outlook 2007 RSS Feeds from a corporate-issued BlackBerry? Our Inbox and Calendar are already integrated with corporate Exchange servers, but it looks like nobody cares too much about the RSS Feeds. Is there some setting on my BlackBerry or in Outlook I could possibly tweak to include these updates? I know there are many standalone RSS readers available for BlackBerry (Google Reader for example), but I mention Outlook RSS Feeds specifically in my question because I am subscribing to a number of RSS feeds I've set up on my intranet for various version control systems that would be inaccessible to an external RSS reader. It seems like I might have to set up some sort of email commit notifications if I want anything on my BlackBerry, but I much prefer the 'pull' method of an RSS feed viewer over receiving streams of emails. Please feel free to suggest any alternatives! Edit: I've additionally tried moving my "SVN Repository" folder directly into my Mailbox instead of keeping it as a child of the RSS Feeds folder. This allows me to view the SVN Repository folder on my BlackBerry, where previously the RSS Feeds folder and all children were hidden, but unfortunately it never seems to get populated with the items that are displaying in Outlook. I've even made a fresh commit to make sure that the SVN Repository folder still works correctly in Outlook from outside the RSS Feeds folder, but no luck on the BlackBerry end of things. BlackBerry Model Details: BlackBerry 8310 smartphone (EDGE) v4.2.2.170 Platform 2.5.0.30

    Read the article

  • .htaccess redirect to error page if port is not 80

    - by Momo
    I'm running a portable server from a USB stick. The thing is I also have WAMP installed on my local machine, and Apache somehow gets started on Windows startup because of some random reason which I don't recall now, and it can't be changed. I want to prepare my portable server for situations like this, so killing httpd.exe from the process list and starting my portable server is not an option. Anyway, because of the already active httpd.exe, my portable server's WordPress site can only be accessed through localhost:81. This is a problem, as the WP site is very dependent on the URL and I don't want to include the URL with the port in the WP database. Here is what I want to do through .htaccess: on any path except for the error.php file, check if the port is not 80; if it is not port 80, redirect to /error.php?code=port. Is it possible for this to have priority over WP's redirection or URL handling? In error.php I provided info on how to manually close httpd.exe and such, so my family and friends can access the portable site. It's sort of like a gallery and calendar application for events and other such stuff... Please help; I can't figure it out at all. I know others may not have Apache already running, but I want to prepare for such a situation. Something like the following, but the following doesn't work:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        <If "%{SERVER_PORT} = 80">
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </If>
        <Else>
        RewriteEngine On
        RewriteRule ^(error.php)($|/) - [L]
        RewriteRule ^(.*)$ /error.php?code=port [L]
        </Else>
        </IfModule>
        # END WordPress

    By the way, the portable server Server2Go automatically generates vhosts based on the hostname set in its config file and changes ports if the port (e.g. 80) is already open.
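
    As a sketch of one way to express the same port check with mod_rewrite alone (no <If>/<Else>, which require Apache 2.4), assuming the rules are placed above the standard WordPress block so they run first:

        RewriteEngine On
        # anything arriving on a port other than 80, except error.php itself,
        # gets sent to the error page
        RewriteCond %{SERVER_PORT} !^80$
        RewriteCond %{REQUEST_URI} !^/error\.php$
        RewriteRule ^ /error.php?code=port [L]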

    Read the article

  • How to find process that's using 100% of CPU

    - by Gabriel
    As I'm looking at htop and top I see that my processor usage is always 100%. But I can not see any process that is using that much CPU. Htop shows me only 1-2 processes that use around 5% CPU time. Is there a way to find the processes that use that much CPU time? Here is the output of ps -eo pcpu,pid,user,args | sort -r -k1 | less

        %CPU PID USER COMMAND
        0.8 20413 root jsvc.exec -user tomcat -cp ./bootstrap.jar -Djava.endorsed.dirs=../common/endorsed -outfile ../logs/catalina.out -errfile ../logs/catalina.err -verbose org.apache.catalina.startup.Bootstrap -security
        0.3 631 mysql /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/mysql.pid --skip-external-locking
        0.2 3380 root /usr/local/apache/bin/httpd -k restart -DSSL
        0.2 24698 root tailwatchd
        0.2 22472 root /usr/local/jdk/bin/java -Djava.util.logging.config.file=/usr/local/jakarta/tomcat/conf/logging.properties -Dfile.encoding=UTF8 -XX:MaxPermSize=128m -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=/usr/local/jakarta/tomcat/common/endorsed -classpath /usr/local/jakarta/tomcat/bin/bootstrap.jar -Dcatalina.base=/usr/local/jakarta/tomcat -Dcatalina.home=/usr/local/jakarta/tomcat -Djava.io.tmpdir=/usr/local/jakarta/tomcat/temp org.apache.catalina.startup.Bootstrap start
        0.1 32095 root cpanellogd - processing bandwidth
        0.0 9733 root sleep 1m
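
    A sketch of a few variants that make CPU hogs easier to spot; note that piping ps through sort -r -k1 sorts the %CPU column as text, so letting ps or top do a numeric sort is more reliable:

        # numeric sort on %CPU done by ps itself
        ps -eo pcpu,pid,user,args --sort=-pcpu | head -n 15

        # show individual threads, in case the time is spread over many threads of one process
        top -H

        # sample real usage over a few seconds (sysstat package); also catches short-lived processes
        pidstat 2 5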

    Read the article

  • IIS 7.0 - responses throttled to 500ms blocks?

    - by Julia Hayward
    Scenario: ASP.NET MVC web app sitting on my local machine (Vista Ultimate, IIS 7.0), nothing going on except one user (me) logged in and viewing an index page. The page includes 9 dynamic images drawn from the underlying DB and returned from a controller action. I have got the actual processing time for these images down to 15ms each. Turn on Firebug and watch the page load. What I see is 9 requests for images firing off together – no surprise – but four come back to me almost immediately; two more after 0.5s; another after 1s; then at 1.5s and 2s. Logging on the server side suggests the individual responses are still only taking 15ms. So it appears IIS is queueing things up into 500ms chunks. (Repeating the experiment produces different results, but each time the images return in similar blocks – you might get three in the first group, then three at 0.5s, two at 1s etc, for example – and it's always at 500ms intervals, not anything else.) It's also repeatable cross-browser, and it's not repeatable with other forms of content. I haven't found any particular mention of this problem out there, so I'm sort of assuming it's not an IIS bug, so is it: i) IIS on desktop OSs deliberately does it, to make you use server OSs in production? ii) There is some magical setting that has eluded me for as long as I've known IIS? iii) Something peculiar to MVC or SQL Server 2008? or something else?

    Read the article

  • "Safe" personal router use on apartment-wide network

    - by noisetank
    I recently moved into an apartment with internet included in my rent. This was a boon at first, but now I'm feeling limited. To get devices connected (wired or wireless), I have to whitelist the MAC addresses on mycampusnet.com. This is annoying (considering I'm well over the 10 device limit including my roommate's stuff), but what's really driving me mad is that I don't seem to have any semblance of a "local" network. I've relied heavily on static IPs and port forwarding in the past (accessing NAS and remote desktop) and (as far as I can understand), that functionality is nonexistent without my router set up. Also, as my wired and wireless devices don't always seem to make it onto the same subnet, I'm unable to use any of my iDevices with my Apple TV (I can, however, mirror to no less than four strangers' Apple TVs at any moment, which is a whole other level of discomforting). I've talked to the head of the apartment complex and she told me that they personally don't have any issue with my using a router, but the provider (CampusConnect) does not currently allow it. Apparently, enough people have put in complaints/requests about the restriction (the apartments are for graduate students and University staff, many of which need to set up things like VPNs for work reasons) to open up some sort of ticket to get the functionality in place, but all the calls I've made to get status updates have been a waste of time. My question is: If I plugged my router into the apartment network, what would happen? I've been told already that personal routers would "interfere with the wireless" and that they would shut my port down if I used one, but is that a legitimate thing or just something made up that sounds real to keep the average Joe from pushing it further? I'm guessing there's some way of configuring my router to keep it from disrupting the rest of the network, but it's not something they want to tell me for obvious reasons. Am I right? And if so, what are the chances that they'd notice the difference in traffic or whatever and shut off my port?

    Read the article

  • Virtual memory committed

    - by vinu
    After a server bounce happens, and after around 40-45 days, we receive continuous "Committed Virtual Memory" alerts which indicate usage of swap space in the magnitude of 4 GB. This also causes the application to perform very slowly and experience a number of stalled transactions.
    Server setup: 4 Tomcat servers (version 7.0.22) that are load balanced (not clustered) by 2 Apache servers. The Apache servers themselves supply static content and routing to these 4 Tomcat servers.
    Java runtime version: java version "1.6.0_30", Java(TM) SE Runtime Environment (build 1.6.0_30-b12), Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
    Memory startup parameters: MEMORY_OPTIONS="-Xms1024m -Xmx1024m -Xss192k -XX:MaxGCPauseMillis=500 -XX:+HeapDumpOnOutOfMemoryError -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled"
    Monitoring: Wily monitoring is available on all the production servers; it monitors key server parameters and sends out configurable alert emails based on predefined settings.
    Note: each of the servers also has two other separate Tomcat domains that run different applications.
    Investigated areas:
    There is no heap memory leak and the GC is running fine without any issues over any period of time.
    The current busy thread count corresponds directly to the application usage: weekends and nights have fewer threads compared to business hours.
    ThreadLocal uses a WeakReference internally. If the ThreadLocal is not strongly referenced, it will be garbage-collected, even though various threads have values stored via that ThreadLocal. Additionally, ThreadLocal values are actually stored in the Thread; if a thread dies, all of the values associated with that thread through a ThreadLocal are collected. If you have a ThreadLocal as a final class member, that's a strong reference, and it cannot be collected until the class is unloaded. But this is how any class member works, and isn't considered a memory leak. The cited problem only comes into play when the value stored in a ThreadLocal strongly references that ThreadLocal (sort of a circular reference). In this case, the value (a SimpleDateFormat) has no backwards reference to the ThreadLocal. There's no memory leak in this code.
    Can anyone please let me know what could be the cause of this and what should be monitored?
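
    A sketch of commands that can show where the committed/swap usage actually sits on a Linux host (the PID below is a placeholder for one of the Tomcat JVMs):

        # overall memory and swap picture
        free -m
        vmstat 5 5

        # resident vs. virtual footprint of one Tomcat JVM (12345 is a placeholder PID)
        pmap -x 12345 | tail -n 1

        # which processes hold the swap, largest first (VmSwap needs a reasonably recent kernel)
        grep VmSwap /proc/[0-9]*/status | sort -t: -k3 -rn | head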

    Read the article

  • What could cause random files being downloaded without permission?

    - by Dustin
    I have been having issues lately with a certain directory. It seems someone is placing files into it, or something of that sort, and any attempt to delete them is successful, HOWEVER they reappear over time (maybe not the exact same ones, but random files). I will provide you the information I can and several pictures of my problem: sandbox.mys4l.com/visual/files/b1.jpg Files like this have been appearing in my /visual/ folder, and I have no clue where they are coming from. sandbox.mys4l.com/visual/files/b2.jpg This is what is inside one of those weird files; it appears to be nothing problematic. sandbox.mys4l.com/visual/files/b4.jpg As you can see, in the time it took me to take the first picture, more odd files showed up. These log files are also being uploaded to this directory, and I know I didn't put them there. sandbox.mys4l.com/visual/files/b7.jpg This is inside one of those mysterious .log files; I'm not sure what it's all about. These files only appear to be going into this specific area, and I'm not sure of their origin, only that they will not go away. I have done a full system scan at least twice with an up-to-date virus scanner, and have looked for an unknown script which may be writing them there. Nothing has come up, so I come to you guys as I hear this is the best place to find answers. Hope this problem has a solution!
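
    One generic way to catch whatever is writing the files in the act is a filesystem watch. This is only a sketch, assuming shell access to a Linux host with the inotify-tools package, and /path/to/visual stands in for the real directory:

        # log every create/modify event in the directory, with a timestamp, then correlate
        # the times against web server access logs and cron entries
        inotifywait -m -r -e create -e modify \
            --timefmt '%F %T' --format '%T %w%f %e' \
            /path/to/visual >> /var/log/visual-watch.log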

    Read the article

  • Configure one IIS site to handle two separate SSL certificates using external Load Balancing or SSL Acceleration Servers

    - by bmccleary
    I have one web application on our server that needs to be referenced by two different domain names, both of which have their own SSL certificates. The application is exactly the same for both domains, but we have to keep the two domain names for legal reasons. The problem is that, since both domains need to have their own SSL certificate, inside of our IIS 7.5 configuration we have to have two separate IIS applications (both pointing to the same physical location), each with its own unique IP address and SSL certificate installed. Now, I know that, due to the nature of SSL communications, this is by design and that you can't assign more than one SSL certificate per IP address and domain name. My question is… is there any way around this limitation to keep one web application in IIS and have it service two SSL certificates based on host name? I know that with the basic IIS configuration this is not possible, but I was thinking that with some sort of combination of external load balancing and/or SSL acceleration servers/services we could have these servers process the SSL request and leave IIS clean with one single application. I am not familiar at all with these technologies, hence the reason I am asking if it is theoretically possible. If not, does anyone else know how to achieve this?

    Read the article

  • Hyper-V - low CPU usage

    - by Klark
    I am very new to Hyper-V and virtual machine philosophy in general, so please expect more or less nooby questions :) I have a server that is only used as a host for virtual machines. The OS is Windows Server 2008 R2 and it is running on 16 CPUs and 48 GB of RAM. On the aforementioned server there are 8 VMs, each having 4 CPUs and 4 GB of RAM. On those VMs we are running some CPU intensive tasks. Each machine has nearly 100% CPU usage. After I noticed slow performance I went to the host machine and started playing with Process Explorer. It turned out that CPU usage is very low. Also I/O is very low, and of course, memory consumption is high, which is expected. Of course, I don't expect that those 4 virtual cores dedicated to a VM work as fast as 4 real hardware cores, but still I expected a higher consumption of real hardware. Is this sort of behaviour normal? I see that most of the CPU usage on the host machine is marked as interrupts (which I guess is normal) and all those interrupts are passed to only one core (which is strange). Are there any out-of-the-box optimizations that I could perform to finally use all that processing power that is under the hood? My knowledge of virtualization technology is near to embarrassing, so I would be grateful for any links that could enlighten me :) Thanks.

    Read the article

  • I receive email not addressed to me - virus?

    - by Anne
    Every once in a while I receive email (on Gmail) that isn't addressed to me. Gmail puts it in the spam box, because it 'can't verify that it has been sent by [sender]'. The emails, when opened, contain confidential information about deliveries and paid bills (it does look an awful lot like 'real' mail from well-known companies, and it doesn't look like a scam, since the mail is informative - they give information instead of asking for credit card numbers ;-)), and I even got an email from "Facebook" that I requested a password change and that I have to 'click here' to change the password for [email address that isn't mine]. I am not the only addressee, there seems to be a whole list of Gmail addresses beginning with 'a'. The original addressee obviously has some sort of virus, and now I wonder if this could be a risk for me too. Is my email being sent around without my knowing too? I am not the kind of person who randomly clicks on shady links - I am very careful on the internet - but maybe there are other ways of catching viruses? Is there something I should do/check? Thank you for your help!

    Read the article

  • Hiding subfolders from users with Windows Server security

    - by Frans
    Using Windows Server 2008. I would like to allow all users to map to a common network drive and be able to browse it. But I only want them to be able to see the subfolders they actually have access rights to. Is this doable?
    Example: I have a share with two folders on it, \\domain\share\FolderA and \\domain\share\FolderB, and three different security groups. I would like to map a network drive for all three to \\domain\share. However, group1 should only be able to see FolderA, group2 should only see FolderB, and group3 should see both. I am not just talking about denying access to the actual folder, which is easy enough; I don't want the user to even be able to see that the folder exists. In other words, when group1 logs in and does "dir n:\", they should see N:\FolderA; when group2 logs in, they should see N:\FolderB; and when group3 logs in, they should see both N:\FolderA and N:\FolderB.
    My half-baked solution: if I completely block access to the root then I can't map a drive to it. I can give everyone the traverse right, which then allows the user to map a drive. However, if a member of group1 or group2 tries to go to "N:\" they get an access denied error. If they go to N:\FolderA (for group1) then it works. So that sort of works, but it would be nicer if the user could actually browse to N:\ and only see the subfolders they have access to. I am pretty sure I have seen this done but am not sure how to do it myself. Any advice would be greatly appreciated.

    Read the article

  • Disconnect from PHP after output generated

    - by Oli
    I have a LEMP stack: Nginx sitting in front of PHP-FPM. Because some of the sites are heavy and there's opcode caching, PHP is set up so that there are only 5 child processes running. The aim being that each child can deal with any request in less than half a second and then move on to the next request. One problem I've found is that if it's a big chunk of HTML that's getting sent out, and the user has a slow connection, that PHP thread stays occupied until they've finished downloading. Because of my current setup, I have a pretty unforgiving timeout inside PHP where the script is killed after 20 seconds. This is to make sure everybody gets a turn, but on a slow connection this can mean the user gets cut off with a 504 Gateway Timeout. I was wondering if there was some sort of buffer solution that I could implement within or just behind Nginx that sent the request through and then... well... buffered the content into its own cache and fed that on to the client as and when they could download it. The aim being that the underlying PHP thread can be freed up. What I'm asking for doesn't have to be PHP-specific. Anything that deals with FastCGI, or even any Nginx upstream, might have a similar issue to this.
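
    Nginx can already buffer FastCGI responses in front of PHP-FPM; a sketch of the relevant directives (the socket path and sizes are placeholders to adjust):

        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            include fastcgi_params;

            # let nginx absorb the whole response so the PHP worker is freed,
            # then feed it out to the slow client from memory or a temp file
            fastcgi_buffers 32 64k;
            fastcgi_buffer_size 64k;
            fastcgi_max_temp_file_size 256m;
        }

    Response buffering is on by default for FastCGI upstreams; the sizes only control how much nginx will hold in memory before spilling to a temporary file.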

    Read the article

  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3 node SQL 2005 Cluster which is on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take Snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain some copies on tape. We have used NDMP in other places across the enterprise but we do not want to use it in this specific instance. Basically what I need to do is get the most recent snapshot copy of the databases onto tape, via Tivoli Storage Manager (TSM). What I have done is obtained a basic Windows Server 2003 VM with SnapDrive installed, which is SAN attached and zoned to the NetApp, and I have written a batch file to do the following: mount the latest __RECENT snapshot LUN to the host using a specific drive letter, perform a TSM-based incremental backup, and dismount the LUN. This seems to work fine, except sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups, since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?
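
    As far as the return-code part goes, a minimal batch sketch; the drive letter, log path, and the mount/dismount steps are placeholders (the SnapDrive commands are represented by stub scripts because their exact syntax should be checked against the SnapDrive documentation):

        @echo off
        rem mount the __RECENT snapshot here (site-specific SnapDrive step)
        call mount_recent_snapshot.cmd
        if errorlevel 1 exit /b 12

        rem run the TSM incremental backup of the mounted LUN
        dsmc incremental S:\ -subdir=yes >> C:\logs\snap_backup.log 2>&1
        set RC=%ERRORLEVEL%

        rem dismount regardless of the backup outcome
        call dismount_snapshot.cmd

        rem hand the backup's return code to the TSM scheduler
        exit /b %RC%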

    Read the article

  • What does Apache's "Require all granted" really do?

    - by John Crawford
    I've just updated my Apache server to Apache/2.4.6, which is running under Ubuntu 13.04. I used to have a vhost file that had the following: <Directory "/home/john/development/foobar/web"> AllowOverride All </Directory> But when I ran that I got a "Forbidden. You don't have permission to access /" After doing a little bit of googling I found out that to get my site working again I needed to add the following line "Require all granted" so that my vhost looked like this: <Directory "/home/john/development/foobar/web"> AllowOverride All Require all granted </Directory> I want to know if this is "safe" and does not bring in any security issues. I read on Apache's page that this "mimics the functionality that was previously provided by the 'Allow from all' and 'Deny from all' directives. This provider can take one of two arguments which are 'granted' or 'denied'. The following examples will grant or deny access to all requests." But it didn't say if this was a security issue of some sort or why we now have to do it when in the past you did not have to.
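
    For comparison, a sketch of the old and new spellings of the same "allow everyone" access policy for this directory (so the 2.4 line is no more or less permissive than what the 2.2-style configuration expressed):

        # Apache 2.2 style
        <Directory "/home/john/development/foobar/web">
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

        # Apache 2.4 style (mod_authz_core)
        <Directory "/home/john/development/foobar/web">
            AllowOverride All
            Require all granted
        </Directory>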

    Read the article

  • CentOS Latency High Troubleshooting

    - by Sarah Weinberger
    I have two CentOS servers connected via a 10 Gb fiber optic cable with a network emulator connected between them. All three units sit on a desk in the lab. There is also a regular 1 Gbit Ethernet cable connected to each of the machines, which provides internet connectivity. When I set the latency to something roughly below 30 ms, all is fine. When the latency gets to 70 ms and above, and definitely at 130 ms, the network layer suspends. For instance, if I set the latency (delay) to 70 ms, then launching TeamViewer (or any other application that uses network connectivity) never happens or does not work. There is no timeout message, simply no response. I have to lower the latency back down to zero to see any response and have the box start working. What is the problem and how would I go about fixing it? It seems to me some sort of setting in Linux causes one of the CentOS networking drivers to sit in an infinite loop or something. eth0 is the connection to the internet; all settings are default. eth2 is the 10 Gbit fiber optic connection to the other computer, with the MTU set to 9600 and all other parameters at default values.

    Read the article

  • Designing a software based load balancer

    - by Kishore pandey
    Hello to all Server Fault users, I am new to this website but have constantly been using the mother website, Stack Overflow. Well, to begin with, I would like to design a load balancer for the organization I am working for. As I am very new to this whole idea of load balancing and networks, I am finding it very difficult to start my project. I did a lot of research on already existing load balancers and found some (HAProxy, Nginx) that could solve my problems, but the point is, I am still in a dilemma over whether they could answer the following requirements of mine:
    The client and server in my architecture are distributed.
    The load balancer should take care of the firewall.
    The LB server should balance the load among all servers present in the WWW cloud.
    The LB server should have some sort of configuration file, with the help of which it is possible to configure the servers.
    Heartbeat: with the help of which it would be possible to check if any server is down; if any server is down the request should be passed to some other server.
    Various load balancing algorithms for the incoming requests.
    Easy error handling.
    It should be fairly possible to prioritize the incoming requests.
    Is there any already available load balancer solution on the market that could satisfy these requirements? If not, is there any base code available with the help of which I could develop my own load balancer? If not, where should I start from scratch? I am practically new to everything. Any help from a load balancer expert is very much appreciated. Thanks a ton in advance. Cheers and regards, Kishore
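
    For a sense of how some of these requirements map onto an existing tool, a minimal HAProxy configuration sketch (the backend names and addresses are placeholders) showing a plain configuration file, health checks as the "heartbeat", and a choice of balancing algorithm:

        # /etc/haproxy/haproxy.cfg (sketch)
        defaults
            mode http
            timeout connect 5s
            timeout client  30s
            timeout server  30s

        frontend www
            bind *:80
            default_backend webfarm

        backend webfarm
            balance roundrobin          # or leastconn, source, uri, ...
            option httpchk GET /health  # mark a server down if the check fails
            server web1 10.0.0.11:80 check
            server web2 10.0.0.12:80 check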

    Read the article

  • Website hosting from home - IIS6

    - by Paul
    I'm wanting to host a few websites from home, primarily because I'm using some BETA Microsoft software (.NET 4 and EF) and don't want to install it on my production server, which is hosted at eukhost.com. Basically, I'm completely new to this sort of thing. So far, here is what I've done:
    Registered the domain name at namecheap.com (let's call it mydomain.com).
    Gone to "Nameserver Registration" in the panel and entered my IP address for the NS1 and NS2 records (let's say the IP is 0.0.0.0).
    Gone to "Domain Name Server Setup" and entered ns1.mydomain.com & ns2.mydomain.com.
    Forwarded requests from port 80 to my internal IP (let's say 192.168.1.254).
    Created the website in IIS (I'm just testing with a single website so far, so have not created any host header values).
    Now, if I type in the IP address (http://0.0.0.0) I get the site as expected. However, if I enter http://www.mydomain.com I get an error saying "DNS Error - Cannot find server". I'm aware that there is a service from DynDNS that will automatically change the IP if I have a dynamic address, however my IP has remained static since the ISP was installed in October, so I don't need this. Is there any way that I can get the DNS to work just by configuring IIS or something in Windows? I don't really want to have to pay for any 3rd party service. Thanks,
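
    A sketch of how to check what the outside world currently sees for the domain's delegation, using the standard DNS tools available on Windows (mydomain.com and ns1 are the placeholders from the question):

        rem which name servers the registry is delegating to
        nslookup -type=NS mydomain.com

        rem whether ns1.mydomain.com itself resolves, and whether it answers for the zone
        nslookup ns1.mydomain.com
        nslookup www.mydomain.com ns1.mydomain.com

    If the last two lookups fail, the delegation is pointing at a host that is not actually running a DNS service for the zone, which IIS by itself does not provide.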

    Read the article
