Search Results

Search found 19928 results on 798 pages for 'multiple constructors'.

Page 637/798

  • Port forwarding for samba

    - by EternallyGreen
    Alright, here's the setup: Internet - Modem - WRT54G - hubs - WinXP workstations & Linux SMB server. It's basically a home-style distributed internet connection setup, except it's at a school. What I want is remote, offsite SMB access. I figured I'd need to find out which ports need forwarding and then forward them to the server on the router. I'm told in another question on SF that multiple ports will need forwarding, and it gets somewhat complicated. One of the things I need to know is which ports require forwarding for this, and what complications or vulnerabilities could arise from it. Any additional information you think I should have before doing this would be great. I'm told SMB doesn't support encryption, which is fine. Given that I set up authentication/access control, all this means is that once one of my users authenticates and starts downloading data, the unencrypted traffic could be intercepted and read by a MITM, correct? Given that that's the only problem arising from the lack of encryption, this is of no concern to me. I suppose it could also mean a MITM injecting false data into the data stream, e.g. a user requests file A and the MITM intercepts and replaces the contents of file A with some false data. This isn't really an issue either, because my users would know that something was wrong, and it's not likely anyone would have an incentive to do this anyway. Another thing I've been informed of is Microsoft's poor implementation of SMB, and its crap track record for security. Does this apply if only the client end is MS? My server is Linux.
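    For reference, the standard SMB/CIFS ports are TCP 445 (direct-hosted SMB), TCP 139 (NetBIOS session) and UDP 137-138 (NetBIOS name/datagram); modern Windows and Samba clients only need TCP 445. A minimal sketch of the forwards, written as iptables DNAT rules purely for illustration (on a WRT54G you would enter the equivalent entries in its port-forwarding page; the internal server address is an assumption):

      # Sketch only: forward SMB from the WAN side to an assumed internal Samba server.
      WAN=eth0
      SMB_SERVER=192.168.1.10
      iptables -t nat -A PREROUTING -i $WAN -p tcp --dport 445 -j DNAT --to-destination $SMB_SERVER:445
      iptables -t nat -A PREROUTING -i $WAN -p tcp --dport 139 -j DNAT --to-destination $SMB_SERVER:139
      iptables -A FORWARD -d $SMB_SERVER -p tcp -m multiport --dports 139,445 -j ACCEPT

    Exposing 139/445 to the whole internet attracts constant scanning, so restricting the allowed source addresses, or tunnelling SMB over a VPN or SSH instead, is the usual recommendation.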

    Read the article

  • How can a Linux Administrator improve their shell scripting and automation skills?

    - by ewwhite
    In my organization, I work with a group of NOC staff, budding junior engineers and a handful of senior engineers; all with a focus on Linux. One interesting step in the way the company grows talent is that there's a path from the NOC to the senior engineering ranks. Viewing the talent pool as a relative newcomer, I see that there's a split in the skill sets that tends to grow over time... There are engineers who know one or several particular technologies well and are constantly immersed... e.g. MySQL, firewalls, SAN storage, load balancers... There are others who are generalists and can navigate multiple technologies. All of them learn enough Linux (commands, processes) to do what they need on a daily basis. A differentiating factor between some of the staff is how well they embrace scripting, automation and configuration management methodologies. For instance, we have two engineers who do the bulk of Amazon AWS CloudFormation work, and another who handles most of the Puppet infrastructure. Perhaps a quarter of the engineers are adept at Bash shell scripting. Looking at this in the context of the incredibly high demand for DevOps skills in the job market, I'm curious how other organizations foster the development of these skills and grow their internal talent. Scripting doesn't seem like a particularly teachable concept. How does a sysadmin improve their shell scripting? Is there still a place for engineers who do not/cannot keep up in the DevOps paradigm? Are we simply to assume that some people will be left behind as these technologies evolve? Is that okay?

    Read the article

  • How to direct reverse proxy requests using wildcard vhosts

    - by HonoredMule
    I'm interested in running a reverse proxy with 2-3 virtual machines behind it. Each internal server will run multiple virtual hosts, and rather than manually configuring each individual vhost on the proxy (a variety of vhosts come and go too often for this to be practical), I would like to use something which can employ pattern matching in a sequential order to find the appropriate back-end server. For example:
    Server 1: *.dev.mysite.com
    Server 2: *.stage.mysite.com
    Server 3: *.mysite.com, dev.mysite.com, stage.mysite.com, mysite.com
    Server 4: *
    In the above configuration, task.dev.mysite.com would go to Server 1, dev.mysite.com would go to Server 3, yoursite.stage.mysite.com to Server 2, www.mysite.com to Server 3, and yoursite.com to Server 4. I've looked into using Squid, Varnish, and nginx so far. I have my opinions regarding their respective desirability and general suitability, but it's not readily apparent whether any of them can handle dynamic server selection in this manner without per-vhost configuration. Apache, on the other hand, can do this handily and simply, but otherwise (aside from being well-known and familiar) seems poorly suited to a role that is partly about serving traffic fast. Performance isn't actually a major concern yet, but it seems foolish to use Apache if another system will perform far better and can also handle the desired 'hands-free' configuration. But so is having to frequently adjust the gateway for all production services and risk a network-wide outage... and so is setting oneself up for longer downtime later if Apache turns out to be a bottleneck. Which of these (or other) reverse proxies can do it/would do it best? And maybe I should post this as a separate question, but if Apache is the only practical option, how safe/reliable/predictable is apache-mpm-event in Apache 2.2 (Ubuntu 12.04.1), particularly for a dedicated reverse proxy? As I understand it, the event MPM was declared "safe" as of 2.4, but it's unclear whether reaching stability in 2.4 has any implications for the older (2.2) versions available in the official/stable package channels of various distros.
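    For what it's worth, nginx can express exactly this kind of ordering without per-vhost configuration, because it picks a server block by server_name using exact names first, then the longest matching wildcard, then regular expressions in file order. A minimal sketch, with the four backend addresses as placeholders:

      # Sketch only; the 10.0.0.x addresses are assumptions for the four backends.
      server { listen 80; server_name *.dev.mysite.com;   location / { proxy_pass http://10.0.0.1; } }
      server { listen 80; server_name *.stage.mysite.com; location / { proxy_pass http://10.0.0.2; } }
      server { listen 80; server_name mysite.com www.mysite.com dev.mysite.com stage.mysite.com *.mysite.com;
               location / { proxy_pass http://10.0.0.3; } }
      server { listen 80 default_server; server_name _;   # catch-all for everything else
               location / { proxy_pass http://10.0.0.4; } }

    With this, task.dev.mysite.com hits the longest wildcard (*.dev.mysite.com), dev.mysite.com hits its exact name in the third block, and anything unmatched falls through to the default_server block.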

    Read the article

  • Wordpress hacked. Disabled hacked site but bad traffic continues [closed]

    - by tetranz
    Possible Duplicate: My server's been hacked EMERGENCY My Ubuntu 10.04 LTS VPS has been hacked, probably via a WordPress site. I was alerted to it when I noticed the incoming traffic was unusually high. A WordPress site was littered with eval(base64_decode(...)) code in lots of files. My fault, I had some files writeable by www-data which shouldn't have been. I've disabled that site (a2dissite ... and restart Apache). This has reduced it, but I am still getting some malware-type traffic. My server runs several WordPress and Drupal sites and a home-grown PHP site. I have captured traffic with tcpdump and looked at it in Wireshark. It's reaching out to the login page of some Joomla sites, trying multiple logins. The traffic stops when I stop Apache. If I a2dissite every site and reload (not restart) Apache, the traffic continues. At that point I have no virtual hosts running and no DocumentRoot in my apache2.conf, so I don't know how Apache is still running something. I have searched the other sites with grep for likely looking PHP code with no success. I may have missed it, but I haven't found anything suspicious in the Apache logs. I have mod-status running. I haven't really seen anything much there except that someone is still trying to do a POST to the theme page on the disabled WordPress site, but they now get a 404. What should I be looking for? Are there any tools or whatever which would give me more info about how Apache is generating that traffic? Thanks
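    A few diagnostic commands that may help here (a sketch, with assumed docroot and config paths): sweep every site for the usual injected-PHP patterns, and dump what the running Apache actually still has loaded:

      # Sketch only; /var/www and /srv/www are assumed docroot locations.
      grep -rl --include='*.php' -E 'eval\(base64_decode|gzinflate\(base64_decode' /var/www /srv/www 2>/dev/null
      apache2ctl -S                    # lists every vhost the current config still includes
      ls /etc/apache2/sites-enabled/   # confirm nothing unexpected is still enabled
      ps auxww | grep -i apache        # any stray processes started outside apachectl?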

    Read the article

  • Is there any way to detect when nginx has completed a graceful shutdown?

    - by Daniel Vandersluis
    I have a Ruby on Rails application which is running on Passenger and nginx, with one main webserver and multiple application servers. I am trying to update my deployment process in order to minimize (or ideally, remove) any downtime caused by the deployment. The main roadblock right now is that Passenger takes some time to restart (i.e. reload the application), so in order to get around this, I want to stagger my restarts so that only one app server gets restarted at a time. In order to do this without losing any long-running Passenger processes, I am thinking I need to gracefully shut down the app server's nginx instance, which will cause it to no longer accept new connections but continue to process the existing ones; in addition, HAProxy will detect that the app server is down and route new requests to the other server. However, assuming that there is a long-running process, I am not sure how to detect when the graceful shutdown has completed so that I can start it back up. Since the shutdown is caused by sending a signal (i.e. kill -QUIT $( cat /var/run/nginx.pid )), and the kill command will return immediately, I cannot combine commands (i.e. kill ... && touch restarted), as the touch command will execute immediately, even if nginx hasn't completed its shutdown. Is there any good way to do this?
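    One straightforward sketch (not from the original post, and assuming the pid file path used above): grab the master PID before sending QUIT, then poll for the process to disappear. kill -0 sends no signal; it only tests whether the process still exists.

      # Sketch only: block until nginx has finished its graceful shutdown.
      PID=$(cat /var/run/nginx.pid)
      kill -QUIT "$PID"
      while kill -0 "$PID" 2>/dev/null; do   # master is still winding down its workers
          sleep 1
      done
      touch restarted   # only reached once the master (and its workers) have exited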

    Read the article

  • Intel z77 vs h77 for intensive compiling, gaming [closed]

    - by Bilal Akhtar
    I'm in the market for a desktop motherboard (preferably ATX) that works well with an Intel i7-3770 Ivy Bridge processor at 3.4 GHz (LGA1155 socket). That processor is very fast, and it should handle all my tasks. My question is about the type of motherboard chipset I should choose to accompany it. I plan to use my rig for compiling and developing Debian packages and other OS components, web development, occasional Android apps, chroots, VMs, FlightGear, other gaming but nothing serious, and heavy multitasking, all on Ubuntu. I do NOT plan to overclock, and I never will, so that's not a concern for me. That said, I'm down to three chipset choices:
    Intel H77
    Intel Z68
    Intel Z77
    I'm planning to go for H77 since I don't need any of the new features in Z77. I don't plan to use a second GPU and I will never overclock my CPU/GPU. My question is, will H77-based MoBos handle all my tasks well? Intel advertises that chipset for "everyday computing", but other sites say its base functionality is the same as Z77's. Intel instead advertises Z77 for "serious multitaskers, hardcore gamers and overclocking enthusiasts". But the problem with all the Z77 motherboards I've seen is that they're way too expensive and their main feature seems to be overclocking, which won't be useful to me. Will I lose any raw CPU/GPU performance or HDD r/w with the H77 when comparing it to a Z77? Will heat, etc. be an issue too? From what I've seen, Z77 motherboards have larger heat sinks than H77 ones. Will that be an issue if I go with an H77 motherboard with no heat sinks for the chipset? The CPU will have a fan in both cases, of course. tl;dr: When it comes to CPU/GPU performance and HDD r/w, is the Intel H77 chipset slower than the Z77? I don't care about overclocking or multiple GPUs, and for the processor, I'm set on the Ivy Bridge i7-3770.

    Read the article

  • Can any iSCSI NAS appliance replicate / clone a LUN to an external drive?

    - by Boden
    I would like to back up using Windows Imaging to some kind of NAS appliance. I believe this will require the NAS to support iSCSI. I would then like the appliance to support the replication of the iSCSI LUN to an external eSATA or USB disk connected directly to the appliance. I've found plenty of NAS appliances that can do iSCSI and replicate to an external drive, but none that I've found thus far can do both at once. That is, the devices can do iSCSI, but then the replication feature doesn't work. The idea here is to back up to an appliance located in a secure office far away from the server room. Offsite backups to external hard drive could be managed from the appliance. The benefits of such a setup would be:
    1) very unlikely that fire or random theft would affect both server-room backup and "remote" backup appliance
    2) offsite backups could be managed by multiple trusted people without granting access to the server room
    3) Windows imaging provides poor man's deduplication, so each backup volume can contain a decent backup history.
    I understand why this would be a non-trivial thing to implement, but I'm wondering if such a thing exists? Preferably a tabletop, low to medium cost device. Alternative solutions welcome. NOTE: I'm backing up very few but very large files, so file replication is not a good option.

    Read the article

  • Can I use Outlook 2010 (beta) with OWA account?

    - by Dan
    One of the new features of Outlook 2010 (beta) is the support for multiple Exchange accounts. I'm wondering if there is any way to use this together with a (different) Outlook Web Access account to also get that email in Outlook. Specifically, in addition to my regular corporate (Exchange) account, I also use another corporate account through OWA. With this second account, the only supported access is through OWA; while POP3 access is available, it is not actually supported. I'm not very familiar with configuring Exchange servers, but in talking to those who are, it sounds like enabling Outlook Web Access is (slightly) different than allowing access from Outlook via HTTP(s). Is that correct? If so, it doesn't really seem quite right, as in the absolute worst case one could (theoretically) resort to screen-scraping OWA. Edit: this looks to be about the same as Activesync/OWA Desktop Client? (This doesn't have anything to do with the question, but I'm actually using this second corporate account in Outlook by POP3'ing to Gmail, and then IMAP4 from Gmail to Outlook. Obviously, it would be much nicer to add it as a second Exchange account.)

    Read the article

  • applying rules to CC'd messages in Outlook 2007

    - by Danny Chia
    This is probably a silly question, but here goes: I have two e-mail aliases that forward messages to my main address. I'm trying to create a rule to move all messages that I receive to a specific folder. There is a condition that applies to messages "where my name is in the To or Cc box," but it doesn't let me specify what "my name" is. Not surprisingly, it only affects messages that have not been sent to an alias. So far, I found a solution as follows: I select the condition that applies to messages with specific words in the recipient's address, and I enter my address and aliases as those "words." It's kind of an awkward hack, but it works. Normally, this wouldn't be much of an issue, but I have a "family computer" that is shared among my parents and myself, and I don't want their e-mails and mine to be jumbled together in the Inbox. So my questions are: Is there a solution that is less awkward than the one I used? Alternatively, is there a way to assign multiple e-mail addresses (or aliases) to one account? Thanks!

    Read the article

  • What can I do to lower bandwidth cost on a bandwidth heavy site?

    - by acidzombie24
    The easiest answer is CDN, but I'd like to ask. A friend of mine has a server that is used for mirror downloads. He says he is doing about 10TB of bandwidth a month, which shocked me (I wonder if he is lying). I've seen his site and he has no ads. I suspect he might close his website once he gets the bill. Anyway, since his CPU/RAM is barely used and his HD usage is around 15 GB, I was wondering what he can do to lower the cost if he keeps the site going. I said put up ads, but I don't know if ads would cover it. I found one CDN which offers $0.070/GB. 10240 GB (10TB) * .07 = $717 a month. That seems a little steep, but he is using lots of traffic due to it being a mirror site. Also, using a CDN doesn't make much sense, as he doesn't need multiple servers hosting the files in different areas (which is one reason he isn't using one now). He just needs a big upload pipe. Is there something he can do? At the moment he is paying $200 a month for a dedicated server and he is using WAY more bandwidth than he should be. Side question: can gzipping large, already-compressed files (zip, rar, etc.) help?
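    On the side question, it is easy to test for yourself; already-compressed archives typically shrink by a fraction of a percent at best, and can even grow slightly. A quick check, with the filename as a placeholder:

      # Sketch only; sample.zip stands in for any existing archive.
      cp sample.zip test.zip
      gzip -9 test.zip                 # produces test.zip.gz
      ls -l sample.zip test.zip.gz     # sizes will be nearly identical for already-compressed data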

    Read the article

  • Hard Disk: S.M.A.R.T. Status BAD, Back up and replace

    - by Nick
    I have a laptop hard drive I was trying to use in my new media computer. The case is small and can accommodate two 2.5" drives, no 3.5" drives. I had been using this disk as a storage drive until now. When I go to install Windows on it, I'm first prompted at the BIOS with: Hard Disk: S.M.A.R.T. Status BAD, Back up and replace. And then again in Windows Setup, informing me that the hard drive is bad. So I did a full format of the drive and tried again. Same error. So I took it out and hooked it back up to my other computer via a SATA-USB adapter kit (maybe the cause?). The hard drive is recognized fine, and when I scan it for errors by going: right click -> Properties -> Tools -> Error checking, it reports that the hard drive is fine. I have tried 3 different SATA cables and multiple jumpers. When I plugged my 1.5 TB 3.5" drive into the computer that gives me the S.M.A.R.T. error on the 2.5" drive, it is recognized with no problems. Any ideas on why this is happening and how I can fix it?
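    Worth noting: the Windows error-checking tool only checks the filesystem, not the drive's SMART data, so it can report "fine" while the drive firmware is flagging itself as failing. The SMART attributes can be read directly with smartmontools (available for Linux and Windows); a small sketch, with the device name as a placeholder:

      # Sketch only; /dev/sdb is a placeholder for the suspect drive.
      smartctl -H /dev/sdb          # overall health verdict reported by the drive itself
      smartctl -A /dev/sdb          # attributes: watch Reallocated_Sector_Ct and Current_Pending_Sector
      smartctl -t short /dev/sdb    # kick off the drive's built-in short self-test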

    Read the article

  • Error with procmail script to use Maildir format

    - by bradlis7
    I have this code in /etc/procmailrc:
      DROPPRIVS=yes
      DEFAULT=$HOME/Maildir/

      :0
      * ? /usr/bin/test -d $DEFAULT || /bin/mkdir $DEFAULT
      { }

      :0 E
      {
        # Bail out if directory could not be created
        EXITCODE=127
        HOST=bail.out
      }

      MAILDIR=$HOME/Maildir/
    But, when the directory already exists, sometimes it will send a return email with this error: 554 5.3.0 unknown mailer error 127. The email still gets delivered, mind you, but it sends back an error code to the sending user as well. I fixed this temporarily by commenting out the EXITCODE and HOST lines, but I'd like to know if there is a better solution. I found this block of code in multiple places across the net, but couldn't really find why this error was coming back to me. It seems to happen when I send an email to a local user. Sometimes the user has a .forward file to send it on to other users, sometimes not, but the result has been the same. I also tried removing DROPPRIVS, just in case it was messing up the forwarding, but it did not seem to affect it. Is the line starting with * ? /usr/bin/test a problem? The * signifies a regex, but the ? makes it return an integer value, correct? What is the integer being matched against? Or is it just comparing the integer return value? Do I need a space between the two blocks? Thanks for the help.

    Read the article

  • Cannot access certain URL on my wireless

    - by dehmann
    Problem: On my wireless network at home, there is one URL that I just cannot access with my browser: http://research.microsoft.com/ I have no problems with the Internet connection otherwise. But on that address I just get "The connection was reset. The connection to the server was reset while the page was loading." from Firefox. I am using a DSL modem (Westell) and Linksys wireless router (using DHCP). When I use my neighbor's wireless connection I can access the microsoft site without a problem.
    Additional technical details: But with my connection, here is what I get from nslookup. It is weird: It first cannot find the address, but after I look up another address it can find it:
      $ nslookup research.microsoft.com
      ;; connection timed out; no servers could be reached

      $ nslookup google.com
      Non-authoritative answer:
      Name: google.com    Address: 72.14.204.104
      Name: google.com    Address: 72.14.204.147
      Name: google.com    Address: 72.14.204.99
      Name: google.com    Address: 72.14.204.103

      $ nslookup research.microsoft.com
      Non-authoritative answer:
      Name: research.microsoft.com    Address: 131.107.65.14
    But even after nslookup finds it Firefox still cannot access it. Here is what traceroute says:
      $ traceroute http://research.microsoft.com/
      traceroute: Warning: http://research.microsoft.com/ has multiple addresses; using 8.15.7.117
      traceroute to http://research.microsoft.com/ (8.15.7.117), 64 hops max, 40 byte packets
       1  dslrouter.westell.com (1XX.XXX.X.X)  4.515 ms  2.760 ms  3.072 ms
       2  * * *
    Traceroute just to the IP:
      $ traceroute 131.107.65.14
      traceroute to 131.107.65.14 (131.107.65.14), 64 hops max, 40 byte packets
       1  dslrouter.westell.com (1XX.XXX.X.X)  11.912 ms  2.684 ms  2.808 ms
       2  * * *
    Comparison: Traceroute to google.com IP:
      $ traceroute 72.14.204.99
      traceroute to 72.14.204.99 (72.14.204.99), 64 hops max, 40 byte packets
       1  dslrouter.westell.com (1XX.XXX.X.X)  6.428 ms  6.981 ms  117.099 ms
       2  * * *
    Any comments / help?
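    One quick way to narrow this down (a sketch, not from the original post): ask a public resolver directly, bypassing the Westell/Linksys DNS proxy, and trace to the bare hostname rather than the URL:

      nslookup research.microsoft.com 8.8.8.8    # query a public resolver instead of the router
      dig research.microsoft.com @8.8.8.8 +short
      traceroute research.microsoft.com          # traceroute expects a hostname or IP, not a URL

    If the public resolver answers instantly while the router-supplied DNS keeps timing out, the router's DNS proxy (or an upstream redirector, note the odd 8.15.7.117 result above) is the likelier culprit than the site itself.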

    Read the article

  • Xvnc4 started from xinetd only displays empty gray X screen

    - by Scott Thomason
    Hi. I'm attempting to set up an Ubuntu 10.10 box so that anyone can connect to port 5900 and be greeted by the gdm login manager. To do so, I added a vnc entry in /etc/services and I am starting Xvnc4 using this xinetd config file:
      service vnc
      {
          protocol        = tcp
          socket_type     = stream
          wait            = no
          user            = nobody
          server          = /usr/bin/Xvnc
          server_args     = -geometry 1000x700 -depth 24 -broadcast -inetd -once -securitytypes None
      }
    This kind of works... I can start multiple sessions, all to port 5900, and I get an X screen. The problem is that I only get an empty, gray X screen with no applications started. I know that when you run vncserver from the command line it will look to your ~/.vnc/ directory for your passwd and xstartup files, and I think what I want to do is put "gnome-session" into the xstartup file. However, which xstartup file? The running user is "nobody", who obviously doesn't have a ~/.vnc/ directory. I tried a /root/.vnc/xstartup file and a ~scott/.vnc/xstartup file and it doesn't look like they were even read. I changed the xinetd vnc service so that it would "strace" Xvnc4. I looked through all the "open" lines and didn't get a clue as to what file it was trying to read for xstartup. Can anyone help? I just want a terminal server where the user is presented with a gdm login screen.
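    A common way to get exactly this (a gdm greeter on every incoming VNC connection) is to skip xstartup entirely and have the inetd-spawned X server query the local display manager over XDMCP. A hedged sketch; the gdm config path below is what Ubuntu 10.10 is assumed to use:

      # 1) Enable XDMCP in gdm (assumed path /etc/gdm/custom.conf):
      #      [xdmcp]
      #      Enable=true
      # 2) Point the xinetd-spawned Xvnc at the local gdm instead of broadcasting:
      #      server_args = -geometry 1000x700 -depth 24 -inetd -once -query localhost -securitytypes None
      # 3) Restart gdm and xinetd, then connect to port 5900 again.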

    Read the article

  • How to avoid Windows Genuine Advantage for an XP update?

    - by hlovdal
    I am about to apply updates to a Windows XP installation I have not booted in a couple of years. When going to update.microsoft.com, it forced me first to accept an ActiveX installation and now it wants me to install WGA:
      Windows Update
      To use this latest version of Windows Update, you will need to upgrade some of its components. This version provides you with the following enhancements to our service:
      <... useless list of "advantages" ...
      Details
      Windows Genuine Advantage Validation Tool (KB892130) - 1.1 MB, less than 1 minute
      The Windows Genuine Advantage Validation Tool enables you to verify that your copy of Microsoft Windows is genuine. The tool validates your Windows installation by checking Windows Product Identification and Product Activation status.
      Update for Windows XP (KB898461) - 477 KB, less than 1 minute
      This update installs a permanent copy of Package Installer for Windows to enable software updates to have a significantly smaller download size. The Package Installer facilitates the install of software updates for Microsoft Windows operating systems and other Microsoft products. After you install this update, you may have to restart your system.
      Total: 1.5 MB, less than 1 minute
    I have heard nothing but bad things about WGA, and I absolutely do not want it installed on my system (this answer seems to give some options). Searching for "windows xp" at Microsoft's web pages brought up this page, which says:
      Windows XP Service Pack 3 Network Installation Package for IT Professionals and Developers
      Brief Description
      This installation package is intended for IT professionals and developers downloading and installing on multiple computers on a network. If you're updating just one computer, please visit Windows Update at http://update.microsoft.com .
      ...
      File Name: WindowsXP-KB936929-SP3-x86-ENU.exe
    I am currently downloading this file. Will installing this bring my installation up to date with security updates? What about later updates whenever a new problem is discovered; how can I update without using WGA?

    Read the article

  • Enabling Samba Shares Across Subnets

    - by John
    I was curious how I could go about setting up Samba so that shares could be seen and used across different subnets. We have some Linux devices that are bound to Active Directory, and we would like to have them serve Samba shares to clients that reside in a different subnet from the servers. Is there any way to do this without needing to set up a WINS server or use legacy NetBIOS methods, since the majority of our clients are Windows 7, Windows Server 2003, Windows Server 2008, and Macintosh OS X (10.6 or newer)?
    EDIT: Right now, only clients in the same subnet as the Samba server can see the shares. Clients outside of the subnet (i.e. the client subnet) cannot see or connect to the share. The error returned is: The specified network name is no longer available. It does not seem to matter if I use IP, FQDN, or NetBIOS name to try and connect to the share. We have a common Cisco router handling the inter-subnet routing. Everything else seems to work correctly with this network setup and the device can be pinged from multiple subnets. I also do not believe it to be a firewall type of issue, since the rules for this segment are rather lax.
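    A couple of things worth ruling out on the server side (a sketch with placeholder subnets, not taken from the original post): SMB itself routes fine over TCP 445, so when clients connect by IP or DNS name the usual suspects are hosts allow, interface binding, or the ports Samba listens on:

      # Hypothetical [global] fragment of smb.conf; the 10.1.x.0/24 subnets are placeholders.
      [global]
          smb ports = 445 139
          bind interfaces only = no
          hosts allow = 127.0.0.1 10.1.1.0/24 10.1.2.0/24   # must include the client subnet
          # Browsing ("seeing" shares in Network) relies on NetBIOS broadcasts, which do not
          # cross subnets; connecting by \\server.fqdn\share does not need them.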

    Read the article

  • How to balance the root domain using NS records?

    - by Patrick McCurley
    I have two load balancers that balance incoming traffic across multiple data centers. These work fine. I can test them out by doing an 'nslookup mydomain.com xIP'. I have now taken out DNS services with DYN.com to allow me to manage the DNS zone file, so that a lookup of mydomain.com asks my load balancers which IP address to return.
    Step 1: the NS record for www. I set up A records (glue) for ns1 & ns2, then the corresponding NS records to delegate the DNS lookup to the balancers instead of DYN.com's nameservers:
      ns1.mydomain.com  A   [ip address of load balancer 1]
      ns2.mydomain.com  A   [ip address of load balancer 1]
      www.mydomain.com  NS  ns1.mydomain.com
      www.mydomain.com  NS  ns2.mydomain.com
    All is well - when I type www.mydomain.com, the requests get delegated to my load balancers, who provide the IP address of the endpoint, and the connection is made successfully.
    Step 2: the NS record for the root. This is where I run into problems. I need customers to be able to type 'mydomain.com' (without the www) and ALSO get delegated to the load balancers for the IP address. However, from the research I have done, and through the DYN control panel, it seems that providing an NS record for the root is not allowed, as this overrides the default NS servers. How can I delegate both the root and the www to my load balancers?
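    For context (not from the original question): the apex of a zone cannot be delegated from inside that same zone, because the NS records at mydomain.com are, by definition, the zone's own authoritative servers. Delegating a subdomain works as shown above; for the bare domain, the balancers have to become (or sit behind) the zone's nameservers at the registrar. A hedged BIND-style sketch with placeholder documentation addresses:

      ; Subdomain delegation (works from within the mydomain.com zone):
      www.mydomain.com.   IN NS  ns1.mydomain.com.
      www.mydomain.com.   IN NS  ns2.mydomain.com.
      ns1.mydomain.com.   IN A   203.0.113.10
      ns2.mydomain.com.   IN A   203.0.113.20
      ; The apex (mydomain.com.) cannot point its NS records elsewhere without changing
      ; who is authoritative for the whole zone. To have the balancers answer for the
      ; bare domain, either list them as the domain's nameservers at the registrar,
      ; or keep an apex A record at DYN pointing at a balancer VIP.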

    Read the article

  • Why would Windows use slower network interface despite route metrics?

    - by tim11g
    On my previous notebook, the Dell/Broadcom wireless adapter had an option to automatically disable wireless when a wired network is connected, so I never dealt with multiple active interfaces. My current system has an Intel wireless adapter, and they apparently haven't figured out how to turn it off when there is a wired connection. Unless I explicitly remember to disable wireless when docked, the connection is active. That shouldn't be a problem (in theory), since the route metric will cause traffic to go over the fastest network (as indicated by the lowest metric in the routing table). Apparently not - I'm running a backup and seeing the throughput at 25Mbps or so (which is consistent with 802.11g) when a perfectly good Gigabit Ethernet interface is also connected.
      IPv4 Route Table
      ===========================================================================
      Active Routes:
      Network Destination        Netmask          Gateway       Interface  Metric
                0.0.0.0          0.0.0.0    192.168.1.254   192.168.1.104      10
                0.0.0.0          0.0.0.0    192.168.1.254   192.168.1.109      25
              127.0.0.0        255.0.0.0         On-link        127.0.0.1     306
              127.0.0.1  255.255.255.255         On-link        127.0.0.1     306
        127.255.255.255  255.255.255.255         On-link        127.0.0.1     306
    Windows has correctly identified the Ethernet interface (.104) and assigned it the lower (preferred) metric. So the Ethernet interface should be used exclusively, right? Why is the Ethernet connection not being used? What other factors are involved? (This is with Windows 7 if it makes a difference)
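    One detail worth keeping in mind (not stated in the post, but standard TCP behaviour): the routing table is only consulted when a connection is created, so a backup session established while only the wireless link was up keeps using the wireless adapter's source address even after the wired link appears. To inspect and pin the metrics, a sketch (the adapter names are placeholders):

      :: cmd sketch; adapter names below are placeholders for your actual connections.
      route print -4
      netsh interface ipv4 show interfaces
      :: Give the wireless NIC a deliberately worse (higher) metric than the wired NIC:
      netsh interface ipv4 set interface "Wireless Network Connection" metric=50
      netsh interface ipv4 set interface "Local Area Connection" metric=5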

    Read the article

  • "Run As Administrator" on program right click failing and not launching program

    - by GONeale
    This problem lies within a relatively fresh x64 Windows 7 install (~4 weeks old), but it is also a problem I have seen on Windows Vista machines (x86 versions). Since the other day, launching any program by right-clicking its shortcut (.lnk) and choosing "Run As Administrator" - for instance from the Quick Launch/Jump List in Windows 7 - fails: the screen does not dim and no UAC popup appears. In fact the program does not even load. There is no way around this unless I use the shortcut from "All Programs", which appears to work. Very strange? I have performed no major software installs, nothing out of the ordinary. Has anybody encountered this or know what would be causing it? Here's an example of somebody else experiencing this problem in Vista with no solution: http://www.vistax64.com/vista-general/131918-strange-run-administrator-problem.html and I believe this problem is related (I also cannot right-click My Computer and choose "Manage"): http://windows7forums.com/windows-7-support/5501-run-administrator-broken.html I am running the latest version of Avira AntiVir virus scanner and I'm pretty conscious of what I download; I don't think it is a virus, nor do I believe it is due to the RC version of Windows 7, because I have seen the problem across multiple operating system versions. Thanks guys.

    Read the article

  • Will disabling hyperthreading improve performance on our SQL Server install

    - by Sam Saffron
    Related to: Current wisdom on SQL Server and Hyperthreading
    Recently we upgraded our Windows 2008 R2 database server from an X5470 to a X5560. The theory is both CPUs have very similar performance; if anything, the X5560 is slightly faster. However, SQL Server 2008 R2 performance has been pretty bad over the last day or so and CPU usage has been pretty high. Page life expectancy is massive, we are getting almost 100% cache hit for the pages, so memory is not a problem. When I ran:
      SELECT * FROM sys.dm_os_wait_stats ORDER BY signal_wait_time_ms DESC
    I got:
      wait_type                     waiting_tasks_count  wait_time_ms  max_wait_time_ms  signal_wait_time_ms
      ----------------------------  -------------------  ------------  ----------------  -------------------
      XE_TIMER_EVENT                             115166    2799125790             30165           2799125065
      REQUEST_FOR_DEADLOCK_SEARCH                559393    2799053973              5180           2799053973
      SOS_SCHEDULER_YIELD                     152289883     189948844               960            189756877
      CXPACKET                                234638389    2383701040            141334            118796827
      SLEEP_TASK                              170743505    1525669557              1406             76485386
      LATCH_EX                                 97301008     810738519              1107             55093884
      LOGMGR_QUEUE                             16525384    2798527632          20751319              4083713
      WRITELOG                                 16850119      18328365              1193              2367880
      PAGELATCH_EX                             13254618       8524515             11263              1670113
      ASYNC_NETWORK_IO                         23954146       6981220              7110              1475699
      (10 row(s) affected)
    I also ran:
      -- Isolate top waits for server instance since last restart or statistics clear
      WITH Waits AS
      (
          SELECT wait_type,
                 wait_time_ms / 1000. AS [wait_time_s],
                 100. * wait_time_ms / SUM(wait_time_ms) OVER() AS [pct],
                 ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS [rn]
          FROM sys.dm_os_wait_stats
          WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE',
              'SLEEP_TASK','SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH','WAITFOR','LOGMGR_QUEUE',
              'CHECKPOINT_QUEUE','REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT','BROKER_TO_FLUSH',
              'BROKER_TASK_STOP','CLR_MANUAL_EVENT','CLR_AUTO_EVENT','DISPATCHER_QUEUE_SEMAPHORE',
              'FT_IFTS_SCHEDULER_IDLE_WAIT','XE_DISPATCHER_WAIT','XE_DISPATCHER_JOIN')
      )
      SELECT W1.wait_type,
             CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
             CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
             CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
      FROM Waits AS W1
      INNER JOIN Waits AS W2 ON W2.rn <= W1.rn
      GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
      HAVING SUM(W2.pct) - W1.pct < 95; -- percentage threshold
    And got:
      wait_type            wait_time_s  pct    running_pct
      CXPACKET             554821.66    65.82  65.82
      LATCH_EX             184123.16    21.84  87.66
      SOS_SCHEDULER_YIELD  37541.17     4.45   92.11
      PAGEIOLATCH_SH       19018.53     2.26   94.37
      FT_IFTSHC_MUTEX      14306.05     1.70   96.07
    That shows huge amounts of time synchronizing queries involving parallelism (high CXPACKET). Additionally, anecdotally many of these problem queries are being executed on multiple cores (we have no MAXDOP hints anywhere in our code). The server has not been under load for more than a day or so. We are experiencing a large amount of variance with query executions; typically many queries appear to be slower than they were on our previous DB server, and CPU is really high. Will disabling Hyperthreading help at reducing our CPU usage and increase throughput?
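    Not part of the original post, but since the dominant wait is CXPACKET on a box with many logical cores, the usual first experiment is to cap parallelism at the instance level and see whether CPU and wait times drop. A hedged T-SQL sketch (the values are examples to test with, not recommendations):

      EXEC sp_configure 'show advanced options', 1;
      RECONFIGURE;
      -- Limit how many schedulers a single query can use:
      EXEC sp_configure 'max degree of parallelism', 4;
      RECONFIGURE;
      -- Raise the bar for when the optimizer bothers with a parallel plan (default is 5):
      EXEC sp_configure 'cost threshold for parallelism', 25;
      RECONFIGURE;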

    Read the article

  • How do I give a user permission to view scheduled task history on Server 2008?

    - by pplrppl
    I've set up a scheduled task on Server 2008 and want to run it as a user other than the local administrator. So I chose a domain account created specifically for this task, and once I've closed the scheduled task and entered a valid password, I want to run it and look at the history tab for this task. On the history tab I see: The user account does not have permission to view task history on this computer. What permission must I grant to allow this user to view history, and/or how can I view the history as a local admin/domain admin instead of the user the job will run under? Steps to hopefully reproduce: I'm starting from the "Server Manager" - Configuration - Task Scheduler - Task Scheduler Library. In the top middle pane I have tasks that have been running for several months as the local administrator. In the process of troubleshooting another issue I changed the task to run as Domain\ABCuser. Later in the process of troubleshooting I tried unchecking "run with highest privileges". I have since changed the job back to SERVERNAME\Administrator, but the history tab still showed the permissions message. I may have had multiple Server Manager windows open. After closing the Server Manager and being sure no other management consoles were open, I was able to reopen the Server Manager and see the History tab without error. At this point the task works properly, but should I ever need to run a task as a task-specific account, I'd like to know how to make the history viewable. It may be something as simple as closing all Server Manager windows to allow cached permissions to be refreshed the next time you open the Manager, but at this point I don't know exactly what the solution is.
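    Background that may help (not from the original post): the history tab is just a filtered view of the Microsoft-Windows-TaskScheduler/Operational event log, so a non-administrator needs read access to that log, for example via the built-in Event Log Readers group, and history must be enabled on the server. A hedged sketch, reusing the Domain\ABCuser account from the question:

      :: Allow the account to read event logs:
      net localgroup "Event Log Readers" DOMAIN\ABCuser /add
      :: Confirm the history log is enabled at all:
      wevtutil get-log Microsoft-Windows-TaskScheduler/Operational
      :: Read recent task events directly, without the MMC history tab:
      wevtutil qe Microsoft-Windows-TaskScheduler/Operational /c:20 /rd:true /f:text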

    Read the article

  • Multi-select menu in bash script

    - by am2605
    I'm a bash newbie, but I would like to create a script in which I'd like to allow the user to select multiple options from a list of options. Essentially what I would like is something similar to the example below:
      #!/bin/bash
      OPTIONS="Hello Quit"
      select opt in $OPTIONS; do
          if [ "$opt" = "Quit" ]; then
              echo done
              exit
          elif [ "$opt" = "Hello" ]; then
              echo Hello World
          else
              clear
              echo bad option
          fi
      done
    (sourced from http://www.faqs.org/docs/Linux-HOWTO/Bash-Prog-Intro-HOWTO.html#ss9.1)
    However my script would have more options, and I'd like to allow multiples to be selected. So something like this:
      1) Option 1
      2) Option 2
      3) Option 3
      4) Option 4
      5) Done
    Having feedback on the ones they have selected would also be great, e.g. plus signs next to ones they have already selected. E.g. if you select "1" I'd like the page to clear and reprint:
      1) Option 1 +
      2) Option 2
      3) Option 3
      4) Option 4
      5) Done
    Then if you select "3":
      1) Option 1 +
      2) Option 2
      3) Option 3 +
      4) Option 4
      5) Done
    Also, if they again selected (1) I'd like it to "deselect" the option:
      1) Option 1
      2) Option 2
      3) Option 3 +
      4) Option 4
      5) Done
    And finally when Done is pressed I'd like a list of the ones that were selected to be displayed before the program exits, e.g. if the current state is:
      1) Option 1
      2) Option 2 +
      3) Option 3 +
      4) Option 4 +
      5) Done
    Pressing 5 should print:
      Option 2, Option 3, Option 4
    and the script should terminate. So my question - is this possible in bash, and if so is anyone able to provide a code sample? Any advice would be much appreciated.
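    A minimal sketch of one way to do this in plain bash (the option labels are just the placeholders from the question): keep a parallel array of flags, toggle the flag for the chosen index, and redraw the menu with a "+" marker on every pass.

      #!/bin/bash
      # Toggling multi-select menu sketch (plain bash, no external tools).
      options=("Option 1" "Option 2" "Option 3" "Option 4")
      selected=(0 0 0 0)

      while true; do
          clear
          for i in "${!options[@]}"; do
              mark=""
              [ "${selected[$i]}" -eq 1 ] && mark=" +"
              printf '%d) %s%s\n' "$((i + 1))" "${options[$i]}" "$mark"
          done
          printf '%d) Done\n' "$(( ${#options[@]} + 1 ))"
          read -rp "Choose: " choice
          if [ "$choice" = "$(( ${#options[@]} + 1 ))" ]; then
              break
          elif [ "$choice" -ge 1 ] 2>/dev/null && [ "$choice" -le "${#options[@]}" ] 2>/dev/null; then
              idx=$((choice - 1))
              selected[$idx]=$((1 - selected[$idx]))   # toggle on/off
          fi
      done

      # Print the final selection (one per line), then exit.
      for i in "${!options[@]}"; do
          [ "${selected[$i]}" -eq 1 ] && echo "${options[$i]}"
      done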

    Read the article

  • Changing IP address in IIS for SharePoint site results in Directory listing error

    - by Dan
    I have a server here that has 2 roles. One is Exchange 2007 and the other is MOSS 2007. In IIS I have a site, go.domain.com, which has our OWA. The other is internal.domain.com, which is the MOSS site. I have given the NIC local IPs and each site is using host headers. The GO site has an SSL cert from NetSol, and the MOSS site has a self-signed cert. Right now going to either presents the NetSol cert, which browsers complain about when going to the internal.domain.com site, obviously, since they are on the same IP in IIS. Both sites have always run off the original IP of 10.0.0.3 in IIS. When I added the second IP to the NIC (10.0.0.6) and changed the SharePoint site in IIS to use this for http and https access, I now get this message in a browser when trying to connect:
      Directory Listing Denied
      This Virtual Directory does not allow contents to be listed.
    Changing the IP back to 10.0.0.3 and the internal site is back up. What am I missing here? Do I need to fool around with Alternate Access Mappings in Central Admin? Am I completely missing the point with multiple SSL certs and host headers?

    Read the article

  • Best option for storage clustering

    - by sam
    I'm working on an application that requires a large amount of storage space and I want to handle storage 'in-house' (much cheaper than, say, S3), so we will have multiple servers (initially 4) with large amounts of storage (6TB each). The storage will need to be very flexible and configurable; each piece of data should be replicated on at least 2 servers and must be easily readable/writable from either an API or a UNIX device/file/folder like a normal drive, I don't mind which. We must also be able to easily offload content to our HTTP CDN (Edgecast). It doesn't need to have built-in HTTP support, but if it doesn't, I'm going to have to write something to get the files onto HTTP so they can be pulled by the CDN. I've looked at a lot of solutions including:
    Eucalyptus Walrus
    OpenStack Object Storage
    MogileFS
    and some others which I can't remember. All the servers will be running RHEL 6; they have 4x1.5TB drives which will be RAID1'd into a single partition. All the servers have 1GB/s connections between them and 100MB/s connections to the internet with unlimited bandwidth. They have 2x2.66GHz processors. I understand there isn't a single, perfect answer, but it would be nice to get some pointers.

    Read the article

  • Calendar sharing between unrelated Domains

    - by vlannoob
    I have a 'request' from one of the 'big guys' that has me scratching my head a bit. He is one of our executives and is also a board member of several other organisations, so he floats around between 4 different unrelated companies, each with their own domains, Exchange setup etc. He has domain accounts in each organisation. He has an iPad with multiple Exchange accounts so he can see all his calendars, which works OK for him - Apple calendaring bugs/flaws aside. What he wants is the ability for 'reception staff' at each organisation to 'see' all his calendars, as they are booking things for him in their respective organisations' calendars without it conflicting with bookings made in his calendars by other organisations... you with me?? So for example: Company A books a meeting into his Company A calendar at 9am Monday, and Company B books him a meeting in his Company B calendar at 9:15am Monday on the other side of town, and of course Company C has him booked in all day Monday on their Company C calendar. He gets all those on his iPad, but he would like either a 'global' calendar all can see and book into, or the ability for receptionists at Companies A, B and C to see all the calendars to avoid these kinds of conflicts. I told him to 'go away' straight off the bat; I don't control anything to do with the other companies or know their infrastructure. And quite frankly I don't want any part of it... but he's whining and he's high enough up the food chain that I can't ignore him forever. I'm open to suggestions. Is there any third-party software or service that can facilitate this kind of setup? I really don't want to be creating users in my AD structure for people not in our organisation just so they can get access to his calendar, and I am sure their sysadmins feel the same. As usual - any advice is greatly appreciated ;)

    Read the article
