Search Results

Search found 93926 results on 3758 pages for 'testing server'.


  • Is it possible to install Windows Server 2012 on a Dell PowerEdge with PERC S100 card?

    - by Warren P
    According to Dell's own website, the Windows Server 2012 installation media does not contain an "inbox" driver for the PERC S100 controller in the Dell PowerEdge T110 II server we'd like to evaluate with Server 2012. I have found drivers only for Server 2008 R2, which is what the server is currently running. Is it possible to upgrade this server? (Booting the Server 2012 DVD image leads to the expected result: it cannot locate the system's hard drives or its hard drive controller card, since drivers for the PERC S100 are not on the installation DVD image.)

    Read the article

  • How can I assign an IP address to my virtual Windows Server, so that I can start using it almost as a VPS?

    - by Nelson Symonds
    We are a small office with two PCs, one of which runs 24 hours a day, so it is almost a small server already. We now need a proper server, which is why I am planning to combine my own machine and a server in a single PC. I've used VMware Workstation to create a powerful Windows Server 2008 VM on my PC, and I want to attach it to my network switch through the same PC that hosts it. I want to use it almost like a physical server, with its own IP address and everything, so that I can connect to the server directly from another PC and my applications can connect to it straight by IP address. How should I do this? Step-by-step instructions would be appreciated. Thanks in advance, best regards, Nelson

    Read the article

  • Do I need to transfer Server license CALs to new Domain Controller during AD transition?

    - by drpcken
    I have an old Server 2003 domain controller I'm ready to decommission. I notice that Server 2003 has a Licensing module under Administrative Tools that seems to manage and track user CALs for the domain controller. I don't see this on my newly promoted Server 2008 domain controller, nor do I see any role to add it. Does this need to be transferred to my new Server 2008 domain controller, or will it all happen when the old server is decommissioned? I've already transferred all my Terminal Server licenses to the new server. Thank you!

    Read the article

  • What is the scope of CONTEXT_INFO in SQL Server?

    - by JasonS
    I am using CONTEXT_INFO to pass a username to a delete trigger for the purposes of an audit/history table. I'm trying to understand the scope of CONTEXT_INFO and whether I am creating a potential race condition. Each of my database tables has a stored proc to handle deletes. The delete stored proc takes userId as a parameter and sets CONTEXT_INFO to the userId. My delete trigger then grabs the CONTEXT_INFO and uses it to update an audit table that indicates who deleted the row(s). The question is: if two delete sprocs from different users are executing at the same time, can the CONTEXT_INFO set in one of the sprocs be consumed by the trigger fired by the other sproc? I've seen this article http://msdn.microsoft.com/en-us/library/ms189252.aspx but I'm not clear on the scope of sessions and batches in SQL Server, which is key to the article being helpful! I'd post code, but I'm short on time at the moment. I'll edit later if this isn't clear enough. Thanks in advance for any help.
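
    A minimal T-SQL sketch of the pattern described, assuming the proc and trigger run on the same connection (the table, proc, trigger and column names are hypothetical):

        -- Delete proc: stamps the session with the caller's user id
        CREATE PROCEDURE dbo.Widget_Delete
            @WidgetId int,
            @UserId   int
        AS
        BEGIN
            DECLARE @ctx varbinary(128);
            SET @ctx = CAST(@UserId AS varbinary(128));
            SET CONTEXT_INFO @ctx;    -- session-scoped from this point on
            DELETE FROM dbo.Widget WHERE WidgetId = @WidgetId;
        END
        GO

        -- Delete trigger: reads the stamp back and writes the audit rows
        CREATE TRIGGER dbo.trg_Widget_Delete ON dbo.Widget FOR DELETE
        AS
        BEGIN
            DECLARE @UserId int;
            SET @UserId = CAST(SUBSTRING(CONTEXT_INFO(), 1, 4) AS int);
            INSERT INTO dbo.WidgetAudit (WidgetId, DeletedBy, DeletedAt)
            SELECT WidgetId, @UserId, GETDATE() FROM deleted;
        END
        GO

    CONTEXT_INFO is scoped to the session (connection), so two sessions setting it concurrently do not see each other's values.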

    Read the article

  • How can I turn a column name into a result value in SQL Server?

    - by Brennan
    I have a table in a legacy database with what are essentially boolean values. The column names are stored as string values in another table, so I need to match the column names of one table to string values in the other. I know there has to be a way to do this directly with SQL in SQL Server, but it is beyond me. My initial thought was to use PIVOT, but it is not enabled by default, and enabling it would likely be a difficult process with pushing that change to the Production database. I would prefer to use what is enabled by default. I am considering using COALESCE to translate each boolean value to the string value that I need. This will be a manual process. I think I will also use a table variable to insert the results of the first query into, and use those results to do the second query. I still have the problem that the columns are on a single row, so I wish I could easily pivot the values to put the column names in the result set as strings. But if I could easily do that, I could easily write the query with a sub-select. Any tips are welcome.
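
    A hedged sketch of one way to emit column names as result values without PIVOT, using plain UNION ALL (the table and column names here are hypothetical):

        -- Hypothetical legacy table: dbo.Flags(RowId int, IsActive bit, IsLocked bit)
        SELECT RowId, 'IsActive' AS ColumnName FROM dbo.Flags WHERE IsActive = 1
        UNION ALL
        SELECT RowId, 'IsLocked' FROM dbo.Flags WHERE IsLocked = 1;

    The string literals can then be joined against the table that stores the column names.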

    Read the article

  • How can I stop SQL Server Management Studio from replacing 'SELECT *' with the column list?

    - by Ben McIntyre
    SQL Server Management Studio is driving me crazy. If I create a view and SELECT * from a table, it's all OK and I can save the view. Looking at the SQL for the view (e.g. by scripting a CREATE) reveals that the SELECT * really is saved to the view's SQL. But as soon as I reopen the view using the GUI (right click, Modify), SELECT * is replaced with a column list of all the columns in the table. How can I stop Management Studio from doing this? I want my SELECT * to remain just that. Perhaps it's just the difficulty of googling "SELECT *" that prevented me from finding anything remotely relevant to this (I did put it in double quotes). Please, I am highly experienced in Transact-SQL, so please DON'T give me a lecture on why I shouldn't be using SELECT *. I know all the pros and cons and I do use it at times. It's a language feature, and like all language features it can be used for good or evil (I emphatically do NOT agree that it is never appropriate to use it). Edit: I'm giving Marc the answer, since it seems it is not possible to turn this behaviour off. The problem is considered closed. I note that Enterprise Manager did no such thing. The workaround is to either edit the SQL as text, or go to a product other than Management Studio. Or constantly edit out the column list and replace the * every time you edit a view. Sigh.
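
    A minimal sketch of the edit-as-text workaround mentioned above (the view and table names are hypothetical): script the view and alter it as plain T-SQL instead of opening it in the designer, which leaves the wildcard untouched.

        -- Run from a query window, not via right click > Modify
        ALTER VIEW dbo.MyView
        AS
        SELECT * FROM dbo.MyTable;
        GO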

    Read the article

  • How does C#'s DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Given: Let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc. Let's say that one user runs a report, 5/1/2010 to now, over and over several times. On the webpage, the user sees "5/1/2010" to "5/4/2010." But the web app passes DateTime.Now to the SQL proc as the end date. So, the end date in the proc will always be different, although the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter. Hence the importance of the question. Question: Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, is the web app missing out on huge performance gains? Possible solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would allow the user to get the latest data and always pass the same end date to the SQL proc, "5/5/2010" in this case. Please speak to this as well. Sample proc and execution (if that helps to understand):

        CREATE PROCEDURE GetFooData
            @StartDate datetime,
            @EndDate datetime
        AS
        SELECT *
        FROM Foo
        WHERE LogDate >= @StartDate
          AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

        EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

        EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned for both executions, since the current time is 2010-05-04 15:41:27.

    Read the article

  • How does DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Question: Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, is the web app missing out on huge performance gains? Possible solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would pass the same end date to the SQL proc (per day), and the user would still get the latest data. Please speak to this as well. Given example: Let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc. Let's say that one user runs a report, 5/1/2010 to now, over and over several times. On the webpage, the user sees 5/1/2010 to 5/4/2010. But the web app passes DateTime.Now to the SQL proc as the end date. So, the end date in the proc will always be different, although the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter. Hence the importance of the question. Example proc and execution (if that helps to understand):

        CREATE PROCEDURE GetFooData
            @StartDate datetime,
            @EndDate datetime
        AS
        SELECT *
        FROM Foo
        WHERE LogDate >= @StartDate
          AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

        EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

        EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned for both executions, since the current time is 2010-05-04 15:41:27.

    Read the article

  • Are "TDD Tests" different to Unit Tests?

    - by asgeo1
    I read this article about TDD and unit testing: http://stephenwalther.com/blog/archive/2009/04/11/tdd-tests-are-not-unit-tests.aspx I think it was an excellent article. The author makes a distinction between what he calls "TDD tests" and unit tests; they appear to be different kinds of tests to him. Before reading this article I thought unit tests were a by-product of TDD. I didn't realise you might also create "TDD tests". The author seems to imply that creating unit tests is not enough for TDD, as the granularity of a unit test is too small for what we are trying to achieve with TDD, so his TDD tests might test a few classes at once. At the end of the article there is some discussion between the author and some other people about whether there really is a distinction between "TDD tests" and unit tests. There seems to be some contention around this idea. The example "TDD tests" the author showed at the end of the article just looked like normal MVC unit tests to me; perhaps "TDD tests" vs unit tests is just a matter of semantics? I would like to hear some more opinions on this, and whether there is or isn't a distinction between the two kinds of tests.

    Read the article

  • Oracle Database 11g R2 now supported under SAP as well

    - by Lajos Sárecz
    Since Easter, Oracle Database 11g R2 can also be used under SAP. It is well known that SAP only certifies Release 2 versions, so this is genuinely welcome news for SAP users, who can now apply the following 11g R2 features in SAP environments:
    • Advanced Compression option (for tables, RMAN backups, expdp, the Data Guard network)
    • Real Application Testing
    • Oracle Database 11g Release 2 Database Vault
    • Oracle Database 11g Release 2 RAC
    • Advanced Encryption for tablespaces, RMAN backups, expdp, the Data Guard network
    • Direct NFS
    • Deferred Segments
    • Online Patching
    This means, for example, that the SAP database, or backups made from it, can be compressed. Experience so far shows a compression ratio of 2-4x, depending on the database. The risk of database upgrades, and of any other change affecting the database infrastructure, can be reduced significantly with Real Application Testing. Administrator roles can be separated using Database Vault. The new features of Real Application Clusters 11g R2 also become available. With Transparent Data Encryption, tablespaces and backups can be encrypted in a way that is fully transparent to the application, yet the data cannot be decrypted by accessing the media directly. The Direct NFS client is now supported, which improves NFS access speed significantly. With Deferred Segments, table segments are only allocated when data is actually put into the table. This is useful because installing an application typically creates every table, yet many tables never receive any data; this reduces both installation time and database size. Online Patching makes it possible to install patches without downtime. I think these are attractive capabilities, and it is worth planning a database upgrade under your SAP systems for the near future, since Premier Support for the 10g release expires this summer. For the upgrade I definitely recommend Real Application Testing, which lets you test the upgrade in a test environment under live production load. Unfortunately, the Sun Oracle Database Machine and Exadata are not yet supported under SAP, because the ASM certification has not yet been completed. According to the news, this is expected to happen by early 2011.

    Read the article

  • Introducing NFakeMail

    - by João Angelo
    Ever had to resort to custom code to control emails sent by an application during integration and/or system testing? If you answered yes then you should definitely continue reading. NFakeMail makes it easier for developers to do integration/system testing on software that sends emails by providing a fake SMTP server. You'll no longer have to manually validate the email sending process. It's developed in C# and IronPython and targets the .NET 4.0 framework. With NFakeMail you can easily automate the testing of components that rely on sending mails while doing their job. Let's take a look at some sample code. We start with a simple class containing a method that sends emails:

        class Notifier
        {
            public void Notify()
            {
                using (var smtpClient = new SmtpClient("localhost", 10025))
                {
                    smtpClient.Send("[email protected]", "[email protected]", "S1", ".");
                    smtpClient.Send("[email protected]", "[email protected]", "S2", "..");
                }
            }
        }

    Then to automate the tests for this method we only need the following:

        [Test]
        public void Notify_T001()
        {
            using (var server = new FakeSmtpServer(10025))
            {
                new Notifier().Notify();

                // Verifies two messages are received in the next five seconds
                var messages = server.WaitForMessages(count: 2, timeout: 5000);

                // Verifies the message sender
                Debug.Assert(messages.All(m => m.From.Address == "[email protected]"));
            }
        }

    The created FakeSmtpServer instance will act as a simple SMTP server and intercept the messages sent by the Notifier class. It's even possible to verify some fields of each intercepted message, and by default all intercepted messages are saved to the file system in MIME format.

    Read the article

  • TDD - Outside In vs Inside Out

    - by Songo
    What is the difference between building an application Outside In vs building it Inside Out using TDD? These are the books I read about TDD and unit testing:
    • Test Driven Development: By Example
    • Test-Driven Development: A Practical Guide
    • Real-World Solutions for Developing High-Quality PHP Frameworks and Applications
    • Test-Driven Development in Microsoft .NET
    • xUnit Test Patterns: Refactoring Test Code
    • The Art of Unit Testing: With Examples in .Net
    • Growing Object-Oriented Software, Guided by Tests (this one was really hard to understand, since Java isn't my primary language :))
    Almost all of them explained TDD basics and unit testing in general, but with little mention of the different ways the application can be constructed. Another thing I noticed is that most of these books (if not all) ignore the design phase when writing the application. They focus more on writing the test cases quickly and letting the design emerge by itself. However, I came across a paragraph in xUnit Test Patterns that discussed the ways people approach TDD: there are two schools out there, Outside In vs Inside Out. Sadly the book doesn't elaborate more on this point. I wish to know the main difference between these two approaches. When should I use each one of them? To a TDD beginner, which one is easier to grasp? What are the drawbacks of each method? Are there any materials out there that discuss this topic specifically?

    Read the article

  • Docker vs ESXi for Startup Projects - Deploying Code for Dev Testing

    - by JasonG
    Why hello there little programmer dude! I have a question for you and all of your experience and knowledge. I have an ESXi whitebox that I built, an 8-core dude that sits in the corner. I made a mistake recently and took the key that had ESXi, formatted it and used it for something else. No big deal, because the last project I worked on had stalled out. I'm about to pick up another project, and now I need to spin up a whole bunch of stuff for CI, QA + DB, a ticket tracker, wikis, etc. I've been hearing a lot about Docker recently, and as this is just a consumer-grade machine, I'm wondering if it may make more sense for me to use Docker on CoreOS and then put everything there: Bamboo or Hudson, Jira, Confluence, Postgres for the tools to use, then a QA env. I can't really seem to find any documents that directly compare traditional VM infrastructure vs Docker solutions, and I'm wondering if it is fair to compare them. Is there any reason why CoreOS w/ containers would be a strictly worse solution? Or do you have any insight into why I may want to stick with ESXi? I've looked on multiple occasions and can't find a good reason not to switch. I'm not going to run a production env on the server, so I don't need HA when updating security patches or the OS, for example, where ESXi would allow me to restart one VM at a time. I can just shut the thing down and bring it back up if I need a reboot, no problem. So what's up with this container stuff? Is it a fair replacement for ESXi? I'm guessing the Atlassian products would run much better and my RAM would go a lot farther using Docker. Probably the CPU would run much cooler too, and my expensive HDD space would be better utilized.

    Read the article

  • How can Agile methodologies be adapted to High Volume processing system development?

    - by luckyluke
    I develop high-volume processing systems: mathematical models that calculate various parameters based on millions of records, calculated derived fields over millions of records, processing huge files of transactions, etc. I am well aware of unit testing methodologies, and if my code is in C# I have no problem unit testing it. The problem is that I often have code in T-SQL, C# code that is a SQL stored assembly, SSIS workflows with a good amount of logic (and outcomes etc.), or some SAS process. What approach do you use when developing such systems? I usually develop several tests as stored procedures in a dedicated schema (TEST), then automatically run them overnight and check the results. But this is only for T-SQL, and continuous integration is hard. The bigger problem is testing SSIS packages. How do you test them? What is your preferred approach for stubbing data into tables (especially if you need a lot of data initialization)? I have some approaches derived over the years, but maybe I am just not reading enough articles. So banking, telecom and risk developers out there: how do you test your mission-critical apps that process millions of records at day end, month end, etc.? What frameworks do you use? How do you validate that your SSIS package is correct (as you develop it)? How do you achieve continuous integration in such an environment (personally I never got there)? I hope this is not too open-ended a question. How do you test your map-reduce jobs, for example (I do not use Hadoop but this is quite similar)? luke
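
    A minimal sketch of the overnight test-proc approach described above (the schema, table and column names are hypothetical): each check lives in the TEST schema and raises an error the nightly runner can log.

        CREATE PROCEDURE TEST.Check_DerivedTotals
        AS
        BEGIN
            DECLARE @bad int;
            SELECT @bad = COUNT(*)
            FROM dbo.Transactions t
            WHERE t.DerivedTotal <> t.Amount + t.Fee;   -- the invariant under test
            IF @bad > 0
                RAISERROR('Check_DerivedTotals failed: %d mismatched rows', 16, 1, @bad);
        END
        GO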

    Read the article

  • What are the design principles that promote testable code? (designing testable code vs driving design through tests)

    - by bot
    Most of the projects that I work on treat development and unit testing in isolation, which makes writing unit tests at a later stage a nightmare. My objective is to keep testing in mind during the high-level and low-level design phases themselves. I want to know if there are any well-defined design principles that promote testable code. One such principle that I have come to understand recently is Dependency Inversion, through Dependency Injection and Inversion of Control. I have read that there is something known as SOLID. I want to understand whether following the SOLID principles indirectly results in code that is easily testable. If not, are there any well-defined design principles that promote testable code? I am aware that there is something known as Test Driven Development. However, I am more interested in designing code with testing in mind during the design phase itself, rather than driving design through tests. I hope this makes sense. One more question related to this topic: is it alright to refactor an existing product/project, making changes to code and design, for the purpose of being able to write a unit test case for each module?

    Read the article

  • How to access the remote OPC server programmatically?

    - by Shailesh Jaiswal
    I have downloaded and installed the OPCDA.NET client component evaluation and the XMLDA.NET client component evaluation. They provide some C# samples for browsing the available OPC servers, connecting to an OPC server, and browsing the available items on the server. I know the programmatic way in which we can access the local OPC server; it is shown in the sample C# applications. I have installed the OPC server on another machine (a remote machine) and have done all the required settings in the 'dcomcnfg' utility. I can access the remote OPC server from the client machine by using the test client provided with the OPCDA.NET and XMLDA.NET client component evaluations, but I am unaware of how this can be done programmatically. In the available C# samples I found no code that accesses a remote OPC server. Can you provide me with code through which I can browse the available remote machines in my network, browse the available OPC servers on a machine after selecting its name, connect to the OPC server, and browse the available items on the server? Or can you provide any link through which I can resolve the above issue?

    Read the article

  • Windows 2003 IIS FTP Server Migration w/ User Accounts

    - by Brad
    I'm trying to figure out the best way to migrate an FTP server from old hardware to new hardware. The server is on a domain, but not all the users set up on the server (to use FTP) are domain accounts; some are local to the server. For example, I have users both ways: domain\username and machinename\username. The new machine name will be different. So I need to copy all the files with permissions intact from the old server to the new server. Then I need to convert all the user accounts from the old server to the new server. Then I need to change the file permissions so that they are no longer oldserver\username but newserver\username. Can this all be accomplished with CACLS? Is there an easy way that perhaps I'm missing?

    Read the article

  • Minecraft server Rkit ubuntu upstart [closed]

    - by user1637491
    I have an Intel server running Ubuntu Server 12.04.1, and I am working on moving my CraftBukkit Minecraft server to the new platform. I read the Ubuntu upstart cookbook and wrote a .conf file. I have a minecraft user (named minecraft) whose home directory is /home/minecraft; it contains:

        prwxrwxrwx 1 minecraft minecraft 0 Sep 19 14:49 command-fifo
        drwx------ 8 minecraft minecraft 4096 Sep 19 14:50 HDsaves
        drwx------ 2 minecraft minecraft 4096 Aug 31 15:13 logrolls
        -rw-r--r-- 1 root root 5 Sep 19 14:49 minecraft.pid
        drwxrwxrwx 8 minecraft minecraft 180 Sep 19 14:49 ramdisk
        -rw------- 1 minecraft minecraft 119 Sep 19 10:34 save.sh
        drwxrwxrwx 9 minecraft minecraft 4096 Sep 19 14:50 server
        -rw-rw-r-- 1 minecraft minecraft 44 Aug 31 11:40 shutdown.sh

    The server directory contains:

        drwxrwxrwx 6 minecraft minecraft 4096 Aug 30 13:32 Backups
        -rwxrwxrwx 1 minecraft minecraft 0 Sep 18 12:26 banned-ips.txt
        -rwxrwxrwx 1 minecraft minecraft 17 Sep 18 12:26 banned-players.txt
        drwxrwxrwx 4 minecraft minecraft 4096 Aug 30 12:26 buildcraft
        -rwxrwxrwx 1 minecraft minecraft 1447 Sep 18 12:26 bukkit.yml
        -rwxrwxrwx 1 minecraft minecraft 0 Aug 30 11:05 command-fifo
        drwxrwxrwx 2 minecraft minecraft 4096 Aug 30 12:26 config
        lrwxrwxrwx 1 minecraft minecraft 23 Sep 19 14:49 craftbukkit.jar -> ramdisk/craftbukkit.jar
        -rwxrwxrwx 1 minecraft minecraft 17419 Sep 18 12:26 ForgeModLoader-0.log
        -rwxrwxrwx 1 minecraft minecraft 17420 Sep 18 12:24 ForgeModLoader-1.log
        -rwxrwxrwx 1 minecraft minecraft 17420 Sep 18 11:53 ForgeModLoader-2.log
        -rwxrwxrwx 1 minecraft minecraft 2576 Aug 30 11:05 help.yml
        drwxrwxrwx 2 minecraft minecraft 4096 Aug 30 12:31 lib
        drwxrwxrwx 3 minecraft minecraft 4096 Sep 19 14:49 logrolls
        -rwxrwxrwx 1 minecraft minecraft 200035 Sep 4 17:58 Minecraft_RKit.jar
        lrwxrwxrwx 1 minecraft minecraft 12 Sep 19 14:49 mods -> ramdisk/mods
        -rwxrwxrwx 1 minecraft minecraft 5 Sep 18 12:26 ops.txt
        -rwxrwxrwx 1 minecraft minecraft 0 Aug 30 11:05 permissions.yml
        lrwxrwxrwx 1 minecraft minecraft 15 Sep 19 14:49 plugins -> ramdisk/plugins
        lrwxrwxrwx 1 minecraft minecraft 16 Sep 19 14:49 redpower -> ramdisk/redpower
        -rw-r--r-- 1 root root 255 Sep 19 15:10 server.log
        -rwxrwxrwx 1 minecraft minecraft 464 Sep 8 11:09 server.properties
        drwxrwxrwx 3 minecraft minecraft 4096 Sep 5 16:05 SpaceModule
        drwxrwxrwx 3 minecraft minecraft 4096 Aug 30 13:07 toolkit
        -rwxrwxrwx 1 minecraft minecraft 1433 Sep 14 21:04 wepif.yml
        -rwxrwxrwx 1 minecraft minecraft 0 Sep 18 12:26 white-list.txt
        lrwxrwxrwx 1 minecraft minecraft 13 Sep 19 14:49 world -> ramdisk/world
        lrwxrwxrwx 1 minecraft minecraft 20 Sep 19 14:49 world_nether -> ramdisk/world_nether
        lrwxrwxrwx 1 minecraft minecraft 21 Sep 19 14:49 world_the_end -> ramdisk/world_the_end

    The startup .conf file:

        # Starts the minecraft server after loading JRE from ramdisk
        #
        # for now im still working on it
        description "minecraft-server"

        start on filesystem or runlevel [2345]
        stop on runlevel [!2345]

        oom score -999
        kill timeout 60

        pre-start script
            sh /usr/lib/jvm/java.sh
        end script

        script
            cd /home/minecraft
            echo "$(date) Starting minecraft"
            sudo cp -r /home/minecraft/HDsaves/* ramdisk
            sudo chown -R minecraft:minecraft ramdisk
            sudo chmod -R 777 ramdisk
            sudo ln -sf ramdisk/* server
            sudo chown -R minecraft:minecraft server
            sudo chmod -R 777 server
            sudo mv server/server.log server/logrolls/
            zip server/logrolls/temp.zip server/logrolls/server.log
            sudo mv server/logrolls/temp.zip server/logrolls/"$(date)".log.zip
            sudo rm server/logrolls/server.log
            sudo rm -f command-fifo
            sudo mkfifo command-fifo
            sudo chown minecraft:minecraft command-fifo
            sudo chmod 777 command-fifo
            echo "$(date) Root commands finished"
            echo "$(date) Starting Wrapper"
            cd server
            sudo -u minecraft java -Xmx30M -Xms30M -XX:MaxPermSize=40M -Djava.awt.headless=true -jar Minecraft_RKit.jar timv:*spoilers* <> /home/minecraft/command-fifo &
            sudo echo $! >| /home/minecraft/minecraft.pid
            echo "$(date) Minecraft Started"
        end script

        pre-stop script
            cd /home/minecraft
            PID=`cat minecraft.pid`
            if [ "$PID" != "" ]; then
                echo "Stopping MineCraft Server PID=$PID"
                sudo echo save-all >> command-fifo
                sudo echo .stopwrapper >> command-fifo
                wait $PID
                sudo rm minecraft.pid
                sudo rsync -rt --delete ramdisk/* HDsaves/
                echo "$(date) ramdisk save complete"
                echo "MineCraft save-shutdown complete."
            else
                echo "MineCraft not running"
            fi
        end script

    So when I start it up, the upstart generated log says:

        Wed Sep 19 14:49:30 CDT 2012 Starting minecraft
        adding: server/logrolls/server.log (stored 0%)
        Wed Sep 19 14:49:56 CDT 2012 Root commands finished
        Wed Sep 19 14:49:56 CDT 2012 Starting Wrapper
        Wed Sep 19 14:49:56 CDT 2012 Minecraft Started

    Read the article

  • SQL Server 2005 Blocking Problem (ASYNC_NETWORK_IO)

    - by ivankolo
    I am responsible for a third-party application (no access to source) running on IIS and SQL Server 2005 (500 concurrent users, 1 TB of data, 8 IIS servers). We have recently started to see significant blocking on the database (after months of running this application in production with no problems). This occurs at random intervals during the day, approximately every 30 minutes, and affects between 20 and 100 sessions each time. All of the sessions eventually hit the application timeout and abort. The problem disappears and then gradually re-emerges. The SPID responsible for the blocking always has the following features:
    1. WAIT TYPE = ASYNC_NETWORK_IO.
    2. The SQL being run is "(@claimid varchar(15))SELECT claimid, enrollid, status, orgclaimid, resubclaimid, primaryclaimid FROM claim WHERE primaryclaimid = @claimid AND primaryclaimid < claimid". This is relatively innocuous SQL that should only return one or two records, not a large dataset. No other SQL statements have been implicated in the blocking, only this one. It is parameterized SQL for which an execution plan is cached in sys.dm_exec_cached_plans.
    3. The SPID has an object-level S lock on the claim table, so all UPDATEs/INSERTs to the claim table are also blocked.
    4. HOST ID varies; different web servers are responsible for the blocking sessions. E.g., sometimes we trace back to web server 1, sometimes web server 2.
    When we trace back to the web server implicated in the blocking, we find that there is always some sort of application-related error in the Event Log on the web server, linked to the Host ID and Host Process ID from the SQL session. The error messages vary, usually some sort of SystemOutOfMemory. (These error messages seem similar to ones we have seen in the past without such dramatic consequences. We think this was happening before, but didn't lead to blocking. Why now?) There are no known problems with the network adapters on either the web servers or the SQL server (and in any event the record set returned by the offending query would be small). Things ruled out: indexes are regularly defragmented; statistics are regularly updated; we increased the sample size of the statistics on claim.primaryclaimid; we forced recompilation of the cached execution plan; we created a compound index with primaryclaimid, claimid; there are no networking problems, no known issues on the web servers, and no changes to application software on the web servers. We hypothesize that the chain of events goes something like this:
    1. A web server process submits the SQL above.
    2. SQL Server executes the SQL, during which it acquires a lock on the claim table.
    3. The web server process gets an error and dies.
    4. The SQL Server session hangs, waiting for the web server process to read the data set.
    5. SQL Server sessions that need X locks on parts of the claim table (anyone processing claims) are blocked by the lock on the claim table and remain blocked until they all hit the application timeout.
    Any suggestions for troubleshooting while waiting for the vendor's assistance would be most welcome. Is there a way to force SQL Server to lock at the row/page level for this particular SQL statement only? Is there a way to set a threshold on ASYNC_NETWORK_IO waits only?
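
    A hedged sketch of the usual locking knobs for the row/page-level question (the index name is hypothetical; test carefully before touching production). Note that SQL Server 2005 has no per-statement lock-escalation control, only instance-wide trace flags:

        -- Make sure the index serving the query permits row and page locks
        ALTER INDEX IX_claim_primaryclaimid ON dbo.claim
            SET (ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);

        -- Trace flag 1224 delays lock escalation instance-wide
        -- (1211 disables it entirely); both are blunt instruments
        DBCC TRACEON (1224, -1);

        -- A per-query alternative is a table hint, if the vendor SQL can be wrapped:
        SELECT claimid, enrollid, status, orgclaimid, resubclaimid, primaryclaimid
        FROM claim WITH (ROWLOCK)
        WHERE primaryclaimid = @claimid AND primaryclaimid < claimid;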

    Read the article

  • Operating System for first production windows server code?

    - by Unkwntech
    I'm getting ready to purchase a server on which to deploy my first Windows-based (C# .NET) application, but I'm not familiar with using Windows for hosted applications. I have the choice of the following versions of Windows:
    • Windows Server 2003
    • Windows Server 2003 Web
    • Windows Server 2003 Enterprise
    • Windows Server 2008
    • Windows Server 2008 Web
    • Windows Server 2008 Enterprise
    Would one of these be better for deploying a C# .NET application? EDIT: There will be 3 applications deployed, but each will basically be the same: services driven by data brought in from a website that will likely run on the same server. http://www.microsoft.com/windowsserver2008/en/us/compare-roles.aspx

    Read the article

  • Performance improvement of a client-server system

    - by Tanuj
    I have a legacy client-server system where the server maintains a record of some data stored in a SQLite database. The data is related to monitoring access patterns of files stored on the server. The client application is basically a remote viewer of the data. When the client is launched, it connects to the server and fetches the data to display in a grid view. The data gets updated in real time on the server, and the view in the client is automatically refreshed. There are two problems with the current implementation:
    1. When the database gets too big, it takes a lot of time to load the client. What is the best way to deal with this? One option is to maintain a cache on the client side. How best to implement such a cache?
    2. How can the server maintain a diff, so that it only sends the diff during the refresh cycle? There can be multiple clients, and each client needs to display the latest data available on the server.
    The server is a Windows service daemon. Both the client and the server are implemented in C#.
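
    One hedged way to support both the cache and the diff (the schema here is hypothetical): keep a monotonically increasing change sequence on each row, have each client remember the highest sequence it has seen, and serve refreshes with a query like the following (shown in T-SQL form; the same idea works against the SQLite store described):

        SELECT FileId, Path, AccessCount, LastAccess, ChangeSeq
        FROM AccessStats
        WHERE ChangeSeq > @LastSeqSeenByClient
        ORDER BY ChangeSeq;

    The client merges the returned rows into its local cache, so startup only pays for rows that changed since its last session.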

    Read the article

  • Choosing approach for an IM client-server app

    - by John
    Update: totally rewrote this to be more succinct. I'm looking at a new application, one part of which will be very similar to standard IM clients, i.e. text chat, the ability to send attachments, and maybe some real-time interaction like a multi-user whiteboard. It will be client-server, i.e. all traffic goes through my central server. That means if I want to support cross-communication with other IM systems, I am still free to pick any protocol for my own client<--server communication; my server can use XMPP or whatever to talk to other systems. Clients are expected to include desktop apps, but probably also browser-based ones, either through Flex/Silverlight or HTML/AJAX. I see 3 options for my own client-server communication layer:
    1. XMPP. The benefits are that clients already exist, as do open-source servers. However, it requires the most up-front research/learning and also appears like it might raise legal issues due to the GPL.
    2. Custom sockets. A server app makes connections with the clients, allowing any text/binary data to be sent very fast. However, this approach requires building said server from scratch, and also makes a JS client tricky.
    3. Servlets (or a similar web server). Using a tried and tested Java web stack, clients send HTTP requests similar to AJAX-based websites. The benefit is that the server is easy to write using well-established technologies, and easy to talk to. But what restrictions would this bring? Is it appropriate technology for real-time communication?
    Advice and suggestions are welcome, especially the pros and cons of a web-server approach compared to a socket-based approach.

    Read the article

  • Change object on client side or on server side

    - by Polina Feterman
    I'm not sure which is the best practice here. I have some big and complex objects (NOT flat). In each object I have many related objects. For example, Invoice is the main class, and one of its properties is invoiceSupervisor, a big class of its own called User. User can also be non-flat and have a department property, also an object, called Department. Say I want to create a new Invoice. First way: I can present the client with several fields to fill in. Some of them will be combos that I will need to fill with available values, for example available invoiceSupervisors. Then I can send all the chosen values to the server, and on the server I can create a new Invoice and assign all the chosen values to it. To assign the new supervisor, I pull the chosen User on the server by the id the user picked in the combobox. I might do some verification on the User, such as whether the user is eligible to be an invoice supervisor. Then I assign the User object to invoiceSupervisor. Then, after filling all the properties, I save the new invoice. Second way: At the beginning, I can call the server to get a new Invoice. Then on the client I can fill in all the chosen values; for example, I can call the server to get a new User object, fill in its id from the combobox, and assign the User as invoiceSupervisor. After filling the Invoice object on the client, I can send it to the server, and the server will save the new invoice. Before saving, the server can run some validations as well. So which is the best approach: to make the object on the client and send it to the server, or to collect all the values from the client and make a new object on the server using those values?

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 1)

    - by Sadequl Hussain
    It is a fact of life: SQL Server databases change homes. They move from one instance to another, from one version to the next, from old servers to new ones. They move around as an organisation's data grows, applications are enhanced or new versions of the database software are released. If nothing else, servers become old and unreliable and databases eventually need to find a new home. Consider the following scenarios:
    1. A new database application is rolled out in a production server from the development or test environment
    2. A copy of the production database needs to be installed in a test server for troubleshooting purposes
    3. A copy of the development database is regularly refreshed in a test server during the system development life cycle
    4. A SQL Server is upgraded to a newer version. This can be an in-place upgrade or a side-by-side migration
    5. One or more databases need to be moved between different instances as part of a consolidation strategy. The instances can be running the same or different versions of SQL Server
    6. A database has to be restored from a backup file provided by a third-party application vendor
    7. A backup of the database is restored in the same or a different instance for disaster recovery
    8. A database needs to be migrated within the same instance:
       a. Files are moved from direct attached storage to a storage area network
       b. The same database is copied under a different name for another application
    Migrating SQL Server database applications is a complex topic in itself. There are a number of components that can be involved: jobs, DTS or SSIS packages, logins or linked servers are only a few pieces of the puzzle. However, in this article we will focus only on the central part of migration: the installation of the database itself. Unless it is an in-place upgrade, typically the database is taken from a source server and installed in a destination instance. Most of the time, a full backup file is used for the rollout. The backup file is either provided to the DBA, or the DBA takes the backup and restores it in the target server. Sometimes the database is detached from the source and the files are copied to, and attached in, the destination. Regardless of the method of copying, moving, refreshing, restoring or upgrading the physical database, there are a number of steps the DBA should follow before and after it has been installed in the destination. It is these post-installation steps we are going to discuss below. Some of these steps apply in almost every scenario described above, while some will depend on the type of objects contained within the database. Also, the principles hold regardless of the number of databases involved.

    Step 1: Make a copy of data and log files when attaching and detaching

    When detaching and attaching databases, ensure you have made copies of the data and log files if the destination is running a newer version of SQL Server. This is because once attached to a newer version, the database cannot be detached and attached back to an older version. Trying to do so will give you a message like the following:

        Server: Msg 602, Level 21, State 50, Line 1
        Could not find row in sysindexes for database ID 6, object ID 1, index ID 1. Run DBCC CHECKTABLE on sysindexes.
        Connection Broken

    If you try to back up the attached database and restore it in the source, it will still fail.
    Similarly, if you are restoring the database in a newer version, it cannot be backed up or detached and put back in an older version of SQL. Unlike the detach and attach method, though, you do not lose the backup file or the original database here. When detaching and attaching a database, it is important that you keep all the log files available along with the data files. It is possible to attach a database without a log file, and SQL Server can be instructed to create a new log file; however, this does not work if the database was detached while the primary filegroup was read-only. You will need all the log files in such cases.

    Step 2: Change the database compatibility level

    Once the database has been restored or attached to a newer version of SQL Server, change the database compatibility level to reflect the newer version unless there is a compelling reason not to do so. When attaching or restoring from a previous version of SQL, the database retains the older version's compatibility level. The only time you would want to keep a database at an older compatibility level is when the code within your database is no longer supported by SQL Server. For example, outer joins with the *= or =* operators were still possible in SQL 2000 (with a warning message), but not in SQL 2005 anymore. If your stored procedures or triggers use this form of join, you would want to keep the database at an older compatibility level. For a list of compatibility issues between older and newer versions of SQL Server databases, refer to Books Online under the sp_dbcmptlevel topic. Application developers and architects can help you decide whether you should change the compatibility level or not. You can always change the compatibility mode from the newest to an older version if necessary. To change the compatibility level, you can either use the database's properties in SQL Server Management Studio or use the sp_dbcmptlevel stored procedure. Bear in mind that you cannot run the built-in reports for databases from SQL Server Management Studio if you keep the database at an older compatibility level. The following figure shows the error message I received when trying to run the "Disk Usage by Top Tables" report against a database. This database was hosted in a SQL Server 2005 system and still had compatibility mode 80 (SQL 2000). Continues…
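
    A minimal sketch of Step 2's check and change via the stored procedure (the database name is hypothetical):

        EXEC sp_dbcmptlevel 'AdventureWorks';      -- reports the current compatibility level
        EXEC sp_dbcmptlevel 'AdventureWorks', 90;  -- 90 = SQL Server 2005 behaviour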

    Read the article
