Search Results

Search found 7661 results on 307 pages for 'team leader'.


  • Windows Domain Chaos - Any Approach to Solving It?

    - by Chake
    We are running an old Windows 2003 Server as domain controller (DC2003). To safely migrate to Windows 2008 R2 we added a 2008 R2 machine (DC2008R2) to the domain as a domain controller (adprep etc.). After dcpromo on DC2008R2 everything seemed to be OK; the new DC appeared under the "Domain Controllers" node. It wasn't checked at this time whether DC2008R2 could REALLY act as a domain controller. Later we tried to shut down DC2003 and ran into a total mess with non-functional Exchange and Team Foundation Services. After that I got the job to fix... First I thought it could be a problem with DC2008R2, so I removed it as domain controller and installed a new Windows 2008 R2 server, DC2008R2-2. I ran into similar problems. I tried a bunch of stuff, but nothing helped. I won't list it all; maybe I made a mistake, so I'm willing to redo it with your suggestions. To have a starting point I tried the Best Practices Analyzer, which ended up with 24 "Compatible" and 26 "Not Compatible" tests. Of these 26 tests, 19 read the same (I'm translating from German, so this may not be the exact wording): Problem: Using the Best Practices Analyzer for Active Directory Domain Services (Active Directory Domain Services Best Practices Analyzer, AD DS BPA), no data can be gathered using the name of the forest and the domain controller DC2008R2-2. I appreciate any suggestions, this really bothers me.
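
    A sketch of the first diagnostics that should help narrow this down, assuming the AD DS command-line tools are installed on DC2008R2-2 (run from an elevated prompt):

        rem Check overall DC health and DNS registration (verbose)
        dcdiag /v
        dcdiag /test:dns

        rem Check replication status across all DCs
        repadmin /replsummary

        rem Confirm which server actually holds the FSMO roles
        netdom query fsmo

    If the FSMO roles still sit on DC2003, or replication to the new DC never completed, shutting DC2003 down would produce exactly this kind of mess, so these outputs are worth collecting before redoing anything.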

    Read the article

  • Better logging for cronjob output using /usr/bin/logger

    - by Stefan Lasiewski
    I am looking for a better way to log cronjobs. Most cronjobs tend to spam email or the console, get ignored, or create yet another logfile. In this case, I have a Nagios NSCA script which sends data to a central Nagios server. This send_nsca script also prints a single status line to STDOUT, indicating success or failure. 0 * * * * root /usr/local/nagios/sbin/nsca_check_disk This emails the following message to root@localhost, which is then forwarded to my team of sysadmins. Spam. forwarded nsca_check_disk: 1 data packet(s) sent to host successfully. I'm looking for a log method which: doesn't spam the messages to email or the console; doesn't create yet another crufty logfile which requires cleanup months or years later; captures the log information somewhere, so it can be viewed later if desired; works on most Unixes; and fits into an existing log infrastructure, using common syslog conventions like 'facility'. Some of these are third-party scripts, and don't always do logging internally. UPDATE 2010-04-30: In the process of writing this question, I think I have answered myself. So I'll answer myself "Jeopardy-style". Is there any problem with this method? The following will send any cron output to /usr/bin/logger, which will send it to syslog with a 'tag' of 'nsca_check_disk'. Syslog handles it from there. My systems (CentOS and FreeBSD) already handle log rotation. */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 | /usr/bin/logger -t nsca_check_disk /var/log/messages now has one additional message which says this: Apr 29, 17:40:00 192.168.6.19 nsca_check_disk: 1 data packet(s) sent to host successfully. I like /usr/bin/logger because it works well with an existing syslog configuration and infrastructure, and is included with most Unix distros. Most *nix distributions already do log rotation, and do it well.
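
    For the 'facility' requirement, logger also accepts an explicit priority with -p; a sketch of the same cron line, assuming local0 is routed somewhere sensible in syslog.conf:

        # Tag the output and log it at local0.info instead of the default user.notice
        */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 | /usr/bin/logger -t nsca_check_disk -p local0.info

    Routing a dedicated facility like local0 to its own file (or a central loghost) then keeps cron noise separable from the rest of /var/log/messages.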

    Read the article

  • IIS 7 rewriting subdomain to point at a specific port

    - by Tommy Jakobsen
    Having installed Team Foundation Server 2010 on Windows Server 2008, I need an easy URL for our developers to access their repositories. The default URL for the TFS repositories is http://localhost:8080/tfs Now I want the subdomain tfs.server.domain.com to point at http://localhost:8080/tfs. And when you access tfs.server.domain.com/repos_name it should redirect to http://localhost:8080/tfs/repos_name. How can I do this in IIS 7? I already tried using the following rule, but it does not work. I get a 404. <rewrite> <globalRules> <rule name="TFS" stopProcessing="true"> <match url="^(?:tfs/)(.*)" /> <conditions> <add input="{HTTP_HOST}" pattern="^tfs.server.domain.com$" /> </conditions> <action type="Rewrite" url="http://localhost:8080/tfs/{R:1}" /> </rule> </globalRules> </rewrite>
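
    Two things worth noting about that rule: when a request arrives at tfs.server.domain.com/repos_name, the URL the rule sees is repos_name, without a tfs/ prefix, so the ^(?:tfs/) pattern never matches; and rewriting to an absolute URL on another port generally requires Application Request Routing (ARR) with proxying enabled. A sketch of a rule along those lines (untested; assumes ARR is installed):

        <rewrite>
          <globalRules>
            <rule name="TFS" stopProcessing="true">
              <match url="^(.*)$" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^tfs\.server\.domain\.com$" />
              </conditions>
              <action type="Rewrite" url="http://localhost:8080/tfs/{R:1}" />
            </rule>
          </globalRules>
        </rewrite>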

    Read the article

  • Recommendation for robust, customizable, open source, Java servlet-based forum software?

    - by Erik Hermansen
    There is a lot of forum software out there, but it seems to me that a lot of the popular choices are PHP-based. And for my project, I'd like something based on Java servlets so my team can make customizations to it. Another important feature is that I can completely change the pages to hide unwanted elements without too much work. So I'm looking either for a template system or easily editable scripts (i.e. JSPs) that have a clean view separation. Just having skin changes or CSS customization is not enough. I understand that if I have open source, I can change anything I want, but my point is that it should be easy and not requiring mastery of a complex code base. Finally, I want something that has been around for at least a year and deployed on some high-traffic sites. Clustering support (one database, multiple web servers) is highly desirable. Up-time is crucial since I have an SLA to support. What do you think?

    Read the article

  • Most scalable way of serving a small set of static HTTP content

    - by Ekevoo
    The story: Hi guys. I'm among the people responsible for serving the results of the most anticipated (by number of people participating) annual entrance exam in my state. As such, when our results are published, the interest is overwhelming. In the past we delegated the responsibility of serving the results to the media, but that detracts a little from the officialness of these results. This year we went with a (long overdue) experiment of using lighttpd instead of Apache, as well as other physical network optimizations I wasn't directly involved with. The results were very satisfactory. The server didn't choke even once, nor did we see any of the usual Twitter complaints about unavailability and/or slowness that were previously common. However, because we still delegated the first publication of the results to the media, I'm still not 100% sure we can handle the load of actually publishing the results first. The question: Now, because these files are like 14MB in total and a truly lightweight Linux distribution isn't that big either, I'm thinking: what if next year we serve everything from a full RAM drive? Is there such a setup? Is it useful? Is it worth it for a team that uses Debian almost exclusively? Are there other optimizations I should be focusing on instead?
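
    For what it's worth, a full RAM-resident distribution may be unnecessary: with only ~14MB of content, a tmpfs mount for the document root gets the files served from RAM on a stock Debian install. A minimal sketch (path and size are illustrative):

        # Mount a small RAM-backed filesystem and copy the results into it
        mount -t tmpfs -o size=64m tmpfs /var/www/results
        cp -r /srv/results-release/* /var/www/results/

    In practice the kernel's page cache will keep 14MB of hot files in RAM after the first few requests anyway, so the bottleneck is more likely to be connection handling and bandwidth than disk reads.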

    Read the article

  • Copy UNC network path (not drive letter) for paths on mapped drives from Windows Explorer

    - by Ernest Mueller
    I frequently want to share network paths to files with other folks on my team via email or chat. We have a lot of mapped drives here, both ones we set up ourselves and ones set up by our IT overlords. What I'd like to be able to do is to copy the full real path (not the drive letter) from Windows Explorer to send to folks. Example: I have a file in my "Q:" drive, \\cartman\users\emueller, and I want to send a link to the file foo.doc therein to coworkers. When I copy the file path (shift+right click, "copy as path") it gets the file name "Q:\foo.doc". This is unhelpful to others, who would need to see \\cartman\users\emueller\foo.doc to be able to consume the link. In Explorer it clearly knows it - in the address bar I see "Computer - emueller (\\cartman\users) (Q:) -". Is there a way to say "hey man copy that path as text with the \\cartman\users\emueller not the Q: in it?" I know I could just set up mapped network locations instead of the mapped drives for the ones that I set up personally and avoid this problem, but most of the mapped drives like the "users" share come from our IT policy. I could just make a separate network location and then ignore my Q: drive but that's inconvenient (and they do it so they can move accounts across servers). Sure my emailed path might eventually break because I'm losing the drive letter indirection but that's OK with me.
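
    As a stopgap, the UNC root behind a drive letter is recoverable at the command prompt, which at least makes building the full path a copy-paste job (output shown is illustrative):

        C:\> net use Q:
        Local name        Q:
        Remote name       \\cartman\users\emueller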

    Read the article

  • How to replicate a Windows server (IIS, files, configuration state)?

    - by Geo
    Maybe a better question is: what is the closest competitor to DoubleTake? I am looking to replicate a Windows production server so that if it fails I have an immediate backup. Any ideas? NOTE 1: I forgot to add that this server is on the Amazon EC2 cloud. NOTE 2: The main situation we have is recreating the configuration settings for things like IIS, FTP Server, SQL Server, SVN Server. NOTE 3: So far I have been given three options as answers for my original question: AppAssurance -- after talking to their sales team, they do not support Amazon as a cloud provider. Basically there is a technical need to be able to reboot from a disk or similar media, so an ESX virtual machine environment will work, but not EC2. Acronis -- which works as a backup in ghost style. This will work for other types of scenarios. Use the Amazon EC2 API -- this option is ideal, but only works if you are developing a cloud application rather than hosting a regular application in a cloud scenario. This means that I am still looking for the answer. Any other ideas?
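
    On the EC2 API option: imaging the instance on a schedule gives a bootable copy even for a regular hosted application, with no in-guest agent required. A sketch with the modern AWS CLI (the instance ID is a placeholder):

        # Create an AMI of the running server without stopping it
        aws ec2 create-image --instance-id i-0123456789abcdef0 \
            --name "prod-server-backup-$(date +%F)" --no-reboot

    This captures IIS, FTP, SQL Server and SVN configuration as part of the disk image, though --no-reboot means in-flight database writes may not be crash-consistent.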

    Read the article

  • Application runs fine manually but fails as a scheduled task

    - by user42540
    I wasn't sure if this should go here or on stackoverflow. I have an application that loads some files from a network share (the input folder), extracts certain data from them and saves new files (zips them with SharpZLib) on a different network share (output folder). This application runs fine when you open it directly, but when it is set to a scheduled task, it fails in numerous places. This application is scheduled on a Win 2003 server. Let me say right off the bat, the scheduled task is set to use the same login account that I am currently logged in with, so it's not because it's using the LocalSystem account. Something else is going on here. Originally, the application was assigning a drive letter to the input folder using WNetGetConnectionA(). I don't remember why this was done, someone else on our team did that and she's gone now. I think there was some issue with using the WinZip command line with a UNC path. I switched from the WinZip command line utility to using SharpZLib because there were other issues with using the WinZip command line. Anyway, the application failed when trying to assign a drive letter with the error "connection already established." That wasn't true and even after trying WNetCancelConnection(), it still didn't work. Then I decided to just map the drive manually on the server. Then when the app calls Directory.Exists(inputFolderPath) it returns false, even though it does exist. So, for whatever reason, I cannot read this directory from within the application. I can manually navigate to this folder in Windows Explorer and open files. The app log file shows that the user executing it on the schedule is the user I expect, not LocalSystem. Any ideas?
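
    One detail worth checking: drive letters are mapped per logon session, so a mapping made interactively on the server is not visible to the scheduled task's own (non-interactive) session, even under the same account - which would explain Directory.Exists(inputFolderPath) returning false for a path that works in Explorer. A sketch of a wrapper batch file that makes the share available to the task itself (share and application paths are placeholders):

        @echo off
        rem Map the share inside the task's own session, run the app, then clean up
        net use X: \\server\inputshare /persistent:no
        C:\Apps\ExtractApp.exe
        net use X: /delete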

    Read the article

  • WSUS KB978338 Chain of Supersession Incorrect?

    - by Kasius
    The chain appears to be KB978338 to KB978886 to KB2563894 to KB2588516 (newest). All four of these updates are approved on our WSUS server. KB978338 is listed as Not Applicable on all machines, because it has been superseded. This is the behavior I would expect. However, our security office is reporting that KB978338 should still be installed on all machines because its actual effect is not replicated by any of the updates that follow it. Here is the analysis I was sent: KB978886 applies to Vista SP1 only. The rollout of SP2 did not address the ISATAP vulnerability and reintroduces it. KB2563894 only updates two files (Tcpip.sys and Tcpipreg.sys). It does not update the 12 other affected ISATAP, UDP, and NUD .sys and .dll files. (MS11-064) KB2588516 addresses malformed continuous UDP packet overflow, but does not address the ISATAP-related NUD and TCP .sys and .dll files. (MS11-083) So yes, many IP vulnerabilities, but each KB addresses specific issues that do not cross over to other KBs. We can install KB978338 by manually running the .MSU file, but we aren't certain if that will overwrite the couple of files that get updated by later patches, since we would be installing the patch out of order. Is the above analysis correct? Is the chain of supersession incorrectly defined? If it is, what is the proper way to report it so that it can be changed by the correct Microsoft team? We are currently using 32-bit and 64-bit installations of Vista SP2. Note: I should mention that I posted this on Technet as well. I will keep this up-to-date with any information I get on there.

    Read the article

  • Number of routers in a small community lock up and require a reboot

    - by Anthony Hiscox
    I live in a small town which has one primary ISP. Lately I have noticed that a number of wireless routers have been locking up and requiring a reboot before allowing any connections. This has affected two of my routers, my work router, and a few others. In all cases wired connections continued to function as usual. Often wireless clients can see the SSID but simply won't connect. I can only think of a few possibilities and was hoping someone here might be able to point me in the right direction: Our ISP is well known to be flaky; something they are doing is causing this, though what that might be I have no clue, as it seems to affect the wireless only. There's a power issue in town; given our remote location and reputation for crap electrical, this seems reasonable. Only one router was plugged in to a UPS, and I'm not sure of the quality. There is some bug in all the different firmware for every one of these routers (all different). That doesn't seem reasonable, unless: it's an unknown (or known) exploit or DoS of some sort being launched by a massive team of ninjas hell-bent on forcing us all to be tethered to our walls by ethernet cables; or it's just been a coincidence and I'm just paranoid (this has some weight, I mean read 4 again). Anyone else experience similar issues and have some tips?

    Read the article

  • mplayer (mplayerhq.hu) repeats ending audio frames

    - by kamikatze
    mplayer (from mplayerhq.hu) on Windows repeats the last few audio frames upon exit. When the video ends, before you can see Exiting... (End of file) in the command prompt, you will hear the last 1/2 second or so of the audio track again. This behavior is the same across multiple containers, codecs, and sound cards on Vista or Windows 7. Is there a workaround for this? My playback specs: MPlayer Sherpya-MT-SVN-r31027-4.2.5 (C) 2000-2010 MPlayer Team 150 audio & 343 video codecs Playing splash_final.wmv. ASF file format detected. [asfheader] Audio stream found, -aid 1 [asfheader] Video stream found, -vid 2 VIDEO: [WMV3] 1280x720 24bpp 1000.000 fps 6291.5 kbps (768.0 kbyte/s) ========================================================================== Opening video decoder: [dmo] DMO video codecs DMO dll supports VO Optimizations 0 1 DMO dll might use previous sample when requested Decoder supports the following formats: YV12 YUY2 UYVY YVYU RGB8 [..] Decoder is capable of YUV output (flags 0x1b) Movie-Aspect is undefined - no prescaling applied. VO: [directx] 1280x720 = 1280x720 Planar YV12 Selected video codec: [wmv9dmo] vfm: dmo (Windows Media Video 9 DMO) ========================================================================== ========================================================================== Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders AUDIO: 44100 Hz, 2 ch, s16le, 329.8 kbit/23.37% (ratio: 41221-176400) Selected audio codec: [ffwmav2] afm: ffmpeg (DivX audio v2 (FFmpeg)) ========================================================================== AO: [dsound] 44100Hz 2ch s16le (2 bytes per sample) Starting playback...
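
    Since the log shows the dsound audio output in use, and the symptom looks like the DirectSound buffer being replayed at teardown, one experiment worth trying is a different audio output driver and seeing whether the repeat disappears (a guess, not a confirmed fix):

        mplayer -ao win32 splash_final.wmv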

    Read the article

  • Splunk is fantastically expensive: What are the alternatives?

    - by samsmith
    This has been discussed, but it has been several months, so it may be time to revisit it: Earlier discussion RE Splunk alternatives For the record, Splunk rocks. But the pricing is simply beyond what we can consider (when I spoke with Splunk today, the cost for a system to index 5GB/day of data is over $30,000). That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. etc. The Splunk sales team is correct (that for $30K we get more value and functionality than if we spend the same building our own system), but it doesn't matter. The Splunk cost is simply too high (by a multiple). Soooooo, we are looking around! Is anyone out there building a Splunk-like system? Our basic needs: able to listen for syslog messages on multiple UDP ports; able to index the incoming data in an async way; some kind of search engine; some kind of UI; and an API to the search engine (to embed in our console). We currently need to index 3-5GB/day, but need to be able to scale to 10GB/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers. Thanks for your thoughts!
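
    On the first requirement, plain rsyslog on a collector box already covers listening on multiple UDP ports and writing the stream to disk for a separate indexer to pick up; a minimal sketch of /etc/rsyslog.conf, assuming a recent rsyslog that allows repeating the listener directive (ports illustrative):

        # Load the UDP input module and bind two listener ports
        $ModLoad imudp
        $UDPServerRun 514
        $UDPServerRun 10514

        # Dump everything to one spool file for the indexer
        *.* /var/log/central/all.log

    The search engine, UI, and API would still have to come from something else layered on top; this only solves collection.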

    Read the article

  • Standards for documenting/designing infrastructure

    - by Paul
    We have a moderately complex solution for which we need to construct a production environment. There are around a dozen components (and here I'm using a definition of "component" which means "can fail independently of other components" - e.g. an Apache server, a WebLogic web app, an FTP server, an ejabberd server, etc). There are a number of WebLogic web apps - and one thing we need to decide is how many WebLogic containers to run these web apps in. The system needs to be highly available, and communications in and out of the system are typically secured by SSL. Our datacentre team will handle things like VLAN design, racking, server specification and build. So the kinds of decisions we still need to make are: how to map components to physical servers (and WebLogic containers); identify all communication paths, ensure all are either resilient or there's an "upstream" comms path that is resilient, and that failover of that depends on all single points of failure "downstream"; decide where to terminate SSL (on load balancers, or on Apache servers, for instance). My question isn't really about how to make the decisions, but whether there are any standards for documenting (especially in diagrams) the design questions and the design decisions. It seems odd, for instance, that Visio doesn't have a template for something like this - it has templates for more physical layout, and for more logical/software architecture diagrams. So right now I'm using a basic Visio diagram to represent each component and the comms between them, with plans to augment this with hostnames, ports, whether each comms link is resilient, etc. This all feels like something that must have been done many times before. Are there standards for documenting this?

    Read the article

  • SharePoint 2010 Enterprise wiki - [New page] missing

    - by icelava
    I am trying to ramp up knowledge on SharePoint deployment and usage (never did either before), due to a direction to use SharePoint 2010 as a repository platform (wiki format) for our customer's infrastructure documentation. In my test virtual server, a new site using the Enterprise Wiki template was set up. Went into Site Actions > Manage Site Features to activate Wiki Page Home Page. The default sub-web then went from /Pages to /SitePages and looks like the default Team template. The odd thing is that Site Actions is missing the New Page option. My colleague does not understand why this is the case, as it ought to be there. The original /Pages sub-web does have the option. What conditions are in play that influence the appearance of that option? UPDATE Another phenomenon observed: in the Site Actions > View All Site Content view, the wiki document libraries listed in the grid have their hyperlink (e.g. "Site Pages") lead straight to the default page. It would not show its own table listing of pages under that document library, unlike the original Pages document library, which expectedly shows up as a listing. I wonder if this hints at any problems.

    Read the article

  • Scaling databases with cheap SSD hard drives

    - by Dennis Kashkin
    Hey guys! I hope that many of you are working with high traffic database-driven websites, and chances are that your main scalability issues are in the database. I noticed a couple of things lately: Most large databases require a team of DBAs in order to scale. They constantly struggle with limitations of hard drives and end up with very expensive solutions (SANs or large RAIDs, frequent maintenance windows for defragging and repartitioning, etc.) The actual annual cost of maintaining such databases is in the $100K-$1M range, which is too steep for me :) Finally, we got several companies like Intel, Samsung, FusionIO, etc. that just started selling extremely fast yet affordable SSD hard drives based on SLC Flash technology. These drives are 100 times faster in random read/writes than the best spinning hard drives on the market (up to 50,000 random writes per second). Their seek time is pretty much zero, so the cost of random I/O is the same as sequential I/O, which is awesome for databases. These SSD drives cost around $10-$20 per gigabyte, and they are relatively small (64GB). So, there seems to be an opportunity to avoid the HUGE costs of scaling databases the traditional way by simply building a big enough RAID 5 array of SSD drives (which would cost only a few thousand dollars). Then we don't care if the database file is fragmented, and we can afford 100 times more disk writes per second without having to spread the database across 100 spindles. Is anybody else interested in this? I've been testing a few SSD drives and can share my results. If anybody on this site has already solved their I/O bottleneck with SSDs, I would love to hear your war stories! PS. I know that there are plenty of expensive solutions out there that help with scalability, for example the time-proven RAM-based SANs. I want to be clear that even $50K is too expensive for my project. I have to find a solution that costs no more than $10K and does not take much time to implement.

    Read the article

  • Outlook locking network connection/session?

    - by HaydnWVN
    Scenario: We have an 'automatic orders' machine sat in the corner running XP with Outlook 2003. Its job is to check for new emails on a specific account; when it encounters one, it checks the e-mail body for specific wording to determine which customer it is from (using a macro), then it checks the attachment for specific order codes before parsing the attachment to create a .csv file (which is then e-mailed on to one of the sales team) before importing the .csv into our bespoke ERP/Sales Order system to create an order. Problem: Periodically the machine will have symptoms of a lost network connection (unable to connect to any network source). Sometimes after several days, sometimes over a week. Volume of emails/orders processed does not seem to be linked. Additional info: The machine's .pst is stored on a mapped network location. The .csv created is stored on a mapped network location. This is a workgroup, not a domain. All network drives are Samba shares from an Ubuntu fileserver. Our bespoke system runs from a database (MySQL) on an Ubuntu server. Our troubleshooting so far: I have switched machines (the previous one ran Win2000) with the same symptoms. Restarting the machine FIXES the problem. Closing Outlook and then end tasking an Outlook.exe background process FIXES the problem. If you close Outlook without end tasking the background process, Outlook will not reopen (saying it cannot find the .pst file, and it will not open any network location). Does Outlook have some kind of 'max session' linked to network activity that is not closing after a mail request? Could auto-archive be causing this? Is there a tool to check/display what each Outlook.exe process is doing? Have not found many ways to troubleshoot this yet, as it is so infrequent...

    Read the article

  • Running Flash on a headless Solaris box

    - by Marty Pitt
    Our build server is a Solaris box, and I'm trying to run a suite of FlexUnit tests as part of the automated build process. This works by compiling a swf movie with a suite of automated unit tests. The build script launches this movie, which automatically begins running the tests. Results of each test are sent back to the launching script across a port, and written out to a local xml file. Once the tests are completed, the movie closes down, and the build script interrogates the results to see if all the tests passed. The FlexUnit wiki provides information about how to achieve this on a Unix server, by using Xvnc to provide a virtual space for the flash movie to run its tests in. I've provided this information to our sysadmin team (along with the link to the article), and I've been told that because this is a Solaris box, we can't use that approach - Xvnc isn't supported on Solaris. Unfortunately, I know very little about servers, *nix vs Solaris, or Xvnc. Can someone please provide some advice about how we can achieve the same outcome on a Solaris box?
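
    If Xvnc really is unavailable, Xvfb may be worth proposing to the sysadmin team: it ships with the X11 distribution on Solaris and provides the same kind of headless display. A sketch of how the build step could wrap the test run (display number and player binary are placeholders):

        # Start a virtual framebuffer and point the Flash player at it
        Xvfb :99 -screen 0 1024x768x24 &
        XVFB_PID=$!
        DISPLAY=:99 flashplayer FlexUnitTests.swf
        kill $XVFB_PID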

    Read the article

  • SVN hangs on commit - any suggestions for troubleshooting?

    - by Richard Beier
    We're having a problem with SVN... Subversion clients such as TortoiseSVN hang when we commit any more than a few files at a time to our server. Everything appears to actually be committed successfully to the repository; but the client hangs after all the data has been transmitted. We're using version 1.4.4 of the SVN server. We use the svn:// protocol rather than http to connect. We've reproduced this problem with several clients: TortoiseSVN (1.6.10), AnkhSVN (2.1), and the Silk command-line client (1.6.12). This is happening for everyone on the team, though some people seem to be more affected than others. If someone commits only a few files, it often works; but with more than half a dozen files, it usually hangs. Does anyone have troubleshooting suggestions? This has been happening sporadically for a while, but it's become pretty consistent lately. We've been working around the issue by killing the hung SVN client, doing "svn cleanup", and then doing "svn up"; but sometimes that causes tree conflicts. Another workaround is to blow away the workspace and check it out again after every commit; but of course that's pretty annoying. Are there any diagnostics that could help us troubleshoot this? We're considering upgrading to SVN 1.6 server, and installing the server on a new machine; but we're wondering if there's an easier solution. Thanks for your help, Richard
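
    For the record, the current recovery sequence scripts down to a few lines, which at least makes the workaround less painful while the root cause is hunted (a sketch, run in the working copy after killing the hung client):

        svn cleanup
        svn update
        svn status   # check for tree conflicts before continuing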

    Read the article

  • How can I fix "Display Driver has stopped responding and has recovered"?

    - by Vitor Rangel
    I'm using a GeForce GTX 580 with Windows 7 64-bit. The driver version of the GTX is 301.42. The problem happens after a few minutes, when I'm playing specific games. It won't happen in all games - and I have no idea why these particular games don't work. The games that don't work: Battlefield 3, Civilization V, Sniper Elite V2. The games that work: Mass Effect 3, Crysis 2, Team Fortress 2, Left 4 Dead 2, Skyrim, L.A. Noire. As you can see, it's not a case of "the games that demand more stop working". I've tried updating the driver of the graphics card and the BIOS of the motherboard, and even formatted my computer (it was needing it) and installed every driver in the latest version possible. This problem has happened since I bought my graphics card, 6 months ago. After a few minutes, from 10 to 20, the pixels on the monitor become strange, with random colors and effects, like it was broken. Then everything goes black, and the message appears: "Display Driver has stopped responding and has recovered". After that, I need to close the game and start again. I am not overclocking, and my temperature never goes higher than 70ºC.

    Read the article

  • git : The remote end hung up unexpectedly - too many simultaneous users?

    - by Pritam Barhate
    I asked this first on StackOverflow and I was suggested that I should ask it here: We have a self hosted git server (Gitolite) on a VPS account (CPU:2.68GHz RAM:1824MB). This same VPS is also used to publish our underdevelopment web apps for client demos. (Very little traffic). so the main use of the server is as a Git Server Only. This git server is accessed by a team of 30-40 people for various projects. Our problem is that during the day when 6-7 people are trying to access the server (sometimes same repo) we get frequent error message: ssh: connect to host xxx.xxx.xx.xx port 22: Bad file number fatal: The remote end hung up unexpectedly After trying for 10-15 minutes it generally succeeds. During early mornings and late nights when there are only 1-2 people, git commands work with 100% success rate. Also I would like to note that if I access the other file hosted on the server through HTTP it works fine. I found a couple of questions on StackOverflow and on other sites regarding this. But most of the people point towards SSH key set up or conflicts between Msysgit and Cygns SSH. However I don't think this is the problem in our case as we get this behavior on Windows (using msysgit only) as well as Mac Machines. Also if it was SSH configuration issue then it shouldn't work at all. But in our case it works after 10-15 minutes. I think in our case it might be too many simultaneous connections to same server (or same repo) or something like that. Does there exists a setting or a conf file that needs to modified to solve this problem? Please help me solve this problem or point me in the right direction. Thanks in advance. Pritam.

    Read the article

  • Why can't I see all of the client certificates available when I visit my web site locally on Windows 7 IIS 7?

    - by Jay
    My team has recently moved to Windows 7 for our developer machines. We are attempting to configure IIS for application testing. Our application requires SSL and client certificates in order to authenticate. What I've done: I have configured IIS to require SSL and require (and also tried accept) certificates under SSL Settings. I have created the https binding and set it to the proper server certificate. I've installed all the root and intermediate chain certificates for the soft certificates properly in the current user and local machine stores. The problem: When I browse to the web site, the SSL connection is established and I am prompted to choose a certificate. The issue is that the certificate offered is one created by my company that would be invalid for use in the application. I am not given the soft certificates that I have installed using MMC and IE. We are able to utilize the soft certs from our development machines against our Windows 2008 servers that host the application. What I did: I have attempted to copy the Root CA to every folder location in the Current User and Local Machine account stores that the company certificate's root is in. My questions: Could I be mishandling the certs anywhere else? Could there be a local/group policy that could be blocking the other certs from use? What (if anything) should be done differently on Windows 7 compared to 2008 in regards to IIS? Thanks for your help.
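
    One Schannel behavior that can cause exactly this symptom: during the TLS handshake the server sends a list of trusted issuers, and the browser only offers client certificates chaining to an issuer on that list. A hedged experiment is to stop the local IIS from sending the list and see whether the soft certificates then appear in the prompt (treat this as a diagnostic, not a production fix; reboot or restart HTTP services afterwards):

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL" /v SendTrustedIssuerList /t REG_DWORD /d 0 /f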

    Read the article

  • Multiple servers vs 1 big server performance

    - by pistacchio
    Hi to all! My team of developers has suggested a server structure for an upcoming project we are developing. Our structure is "logical", meaning that the various logical components of the application (it is a distributed one) rely on different servers. Some components are more critical than others and will be subjected to more load. Our proposal was to have 1 server per component, but the hardware guys suggested replacing the various machines with a single, bigger one with virtual servers. They're gonna use blade servers. Now, I'm not an expert at all, but my question to the guys was: so if we need, for example, 3 2GHz-CPU / 2GB-RAM machines and you give me 1 machine with 3 2GHz CPUs and 6GB of RAM, is it the same? They told me it is. Is this accurate? What are the advantages and disadvantages of both solutions? What are the generally accepted best practices? Could you point out some URL reference dealing with the problem? Thank you in advance! EDIT: Some more info. The (internet/intranet) application is already layered. We have some servers on the DMZ that will expose pages to the internet, and the databases are on their own machines. What we want to split (and they want to join) are some web servers that mainly expose web services. One is a DAL that communicates with the database layer, one is our Single Sign On / User Profile application that gets called once per page, and one is a clone of what is seen on the Internet, to be used on our LAN.

    Read the article

  • Pivot Table grand total across columns

    - by Jon
    I'm using Excel 2010 and Power Pivot. I'm trying to calculate confidence and velocity for a development team. I'm extracting some information from our time and defect system each day and building a data set. What I need to do with Excel is do the calculations. So each day I add to my data set 1 row per task in the current project, the estimate for that task, and the time spent on that task. What I want to calculate is the estimate/actual for each task but also for each person. The trouble is that each day the actual is cumulative, so I need to pick out the maximum value for each task. The estimate should remain unchanged. I can make this work at the task level with a calculated measure (=MAX(worked)/MAX(estimate)) but I don't know how to total this up for a person. I need the sum of the max worked for each task. So a dataset might look like:

    Name  Task  Estimate  Worked
    N1    T1    3         1
    N2    T2    3         1
    N3    T3    4         1
    N1    T1    3         2
    N2    T4    5         1
    N3    T3    4         2
    N1    T5    1         2
    N2    T6    2         3
    N3    T7    3         2

    What I want to see is that for task T1, 2 days were worked against an estimate of 3 days - so 2/3. For person N1 I want to see that they worked a total of 4 days against an estimate of 4 days, so 4/4. For person N2, they worked 5 days for an estimate of 10 days. Any ideas on how I can achieve this?
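
    Since the data is already in Power Pivot, the "sum over tasks of the max" shape is what DAX's SUMX iterator does; a sketch of measures along those lines, assuming the table is named Work (names illustrative, untested). Note the estimate also repeats per day, so it needs the same per-task deduplication:

        EstimateDedup := SUMX ( VALUES ( Work[Task] ), CALCULATE ( MAX ( Work[Estimate] ) ) )
        WorkedDedup   := SUMX ( VALUES ( Work[Task] ), CALCULATE ( MAX ( Work[Worked] ) ) )
        Confidence    := [WorkedDedup] / [EstimateDedup]

    With Name on the pivot rows, each person's Confidence then takes the max Worked per task (handling the repeated T1 rows) before summing and dividing, which matches the 4/4 for N1 and 5/10 for N2 above.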

    Read the article

  • Best ASP.NET Hosting

    - by dotnetguts
    There are many ASP.NET web hosting companies which spend a lot on advertising and also give you very cheap rates, as low as $5, but when it comes to support they are simply hopeless. Can everyone please pass on their experience with past hosting companies and suggest a good ASP.NET hosting company? Please consider the following requirement factors: 1) ASP.NET 3.5 or 4.0 supported. 2) URL rewriter support. 3) GZip support (dynamic, through code). 4) Initial setup support (if required). 5) SQL Server 2005 or 2008. 6) Allows access to the SQL Server DB using SQL Mgmt Studio. 7) Environment supporting backup and restore of the DB on my own, without involving the tech support team. 8) Full-text search support. 9) FTP support. 10) I am able to send at least 500 emails daily. 11) 99.9% uptime (no matter that all web hosts say they have 99.9% uptime, it's not always true). 12) Alert email to be sent when they do any maintenance or during downtime. 13) Hosting price should be reasonable. In case you feel I am missing something, please add to the list. Can anyone suggest a good web hosting company based on the above factors?

    Read the article
