Search Results

Search found 23901 results on 957 pages for 'deployment process'.

  • Setting the Java classpath for use in Runtime.exec

    - by phil swenson
    I am trying to spawn a process using Runtime.exec, and I want it to use my current classpath (System.getProperty("java.class.path")). Unfortunately, I am having all kinds of issues: when it works on my Mac, it doesn't work on Windows, and it doesn't work on my Mac either when there is a space in the classpath. The error I always get is NoClassDefFoundError, so it's related to how I'm building and passing in the classpath. Here is some sample code:

        String startClass = "com.test.MyClass";
        String javaHome = System.getProperty("java.home");
        String javaCmd = javaHome + "/bin/java";
        String classPath = "-Djava.class.path=" + System.getProperty("java.class.path");
        String[] commands = new String[]{javaCmd, classPath, startClass};
        String commandString = StringUtils.join(commands, " ");
        Process process = Runtime.getRuntime().exec(commandString);

    So, how should I set up the classpath? Thanks for any help.
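
    A likely fix, offered as a hedged sketch rather than the one true answer: pass the command to exec as a String[] (or use ProcessBuilder) instead of joining it into a single string, so a classpath containing spaces is never re-tokenized on whitespace. The com.test.MyClass start class is the one from the question above.

        import java.io.File;

        public class SpawnWithClasspath {
            public static void main(String[] args) throws Exception {
                String javaCmd = System.getProperty("java.home")
                        + File.separator + "bin" + File.separator + "java";
                // The String[] overload hands each element to the child process
                // verbatim, so embedded spaces in the classpath survive intact.
                String[] cmd = {
                    javaCmd,
                    "-cp", System.getProperty("java.class.path"),
                    "com.test.MyClass"
                };
                Process p = Runtime.getRuntime().exec(cmd);
                // NB: in real code, also consume the child's stdout/stderr so
                // its output pipes cannot fill up and block it.
                int exit = p.waitFor();
                System.out.println("child exited with " + exit);
            }
        }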

    Read the article

  • Can you help me with my Perl homework?

    - by riya
    Could someone write simple Perl programs for the following scenarios?

    1. Convert a list from {1,2,3,4,5,7,9,10,11,12,34} to {1-5,7,9-12,34} (see the sketch after this list).
    2. Sort a list of negative numbers.
    3. Insert values into a hash: there is a file with content "c1 c2 c3 c4" and "r1 r2 r3 r4"; put it into a hash where keys = {c1,c2,c3,c4} and values = {r1,r2,r3,r4}.
    4. There are test cases running; each test case runs as a process and has a process ID. The logs are written to a logfile with the process ID appended to each line. Write a program to find out whether a test case has passed or failed. The program should keep running while the processes are running and display the output.
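
    For the first scenario, a minimal sketch of the usual consecutive-run approach, assuming the input list is already sorted:

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Collapse consecutive runs: (1,2,3,4,5,7,9,10,11,12,34) -> "1-5,7,9-12,34"
        my @nums = (1, 2, 3, 4, 5, 7, 9, 10, 11, 12, 34);
        my @ranges;
        my ($start, $prev) = ($nums[0], $nums[0]);
        for my $n (@nums[1 .. $#nums]) {
            if ($n == $prev + 1) {          # still inside a consecutive run
                $prev = $n;
            } else {                        # run ended: emit it, start a new one
                push @ranges, $start == $prev ? $start : "$start-$prev";
                ($start, $prev) = ($n, $n);
            }
        }
        push @ranges, $start == $prev ? $start : "$start-$prev";
        print join(",", @ranges), "\n";     # prints 1-5,7,9-12,34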

    Read the article

  • Django ORM and multiprocessing

    - by Ankur Gupta
    Hi, I am using the Django ORM in my Python script in a decoupled fashion, i.e. it's not running in the context of a normal Django project. I am also using the multiprocessing module, and the different processes are in turn making queries. The processes ran successfully for an hour and then exited with "IOError: [Errno 32] Broken pipe". Upon further diagnosis and debugging, this error pops up when I call save() on the model instance. I am wondering: is the Django ORM process-safe? Why else would this error arise? Cheers, Ankur
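
    A common cause (an assumption here, since the question doesn't show the setup) is that forked workers inherit the parent's open database connection, and concurrent use of that one shared socket eventually breaks the pipe. A minimal sketch of the usual workaround: close the inherited connection at the start of each worker so Django lazily opens a fresh, private one. MyModel and myapp are hypothetical names.

        from multiprocessing import Process

        from django.db import connection
        from myapp.models import MyModel  # hypothetical app/model


        def worker(model_id):
            # Drop the connection inherited across fork(); Django will
            # transparently open a new one on the next query, so each
            # process talks to the database over its own socket.
            connection.close()

            obj = MyModel.objects.get(pk=model_id)
            obj.save()


        if __name__ == "__main__":
            procs = [Process(target=worker, args=(i,)) for i in range(4)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()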

    Read the article

  • How do I open an already opened file with a .NET StreamReader?

    - by Jon Cage
    I have some .csv files which I'm using as part of a test bench. I can open them and read them without any problems, unless I've already got the file open in Excel, in which case I get an IOException:

        System.IO.IOException : The process cannot access the file 'TestData.csv' because it is being used by another process.

    This is a snippet from the test bench:

        using (CsvReader csv = new CsvReader(
            new StreamReader(
                new FileStream(fullFilePath, FileMode.Open, FileAccess.Read)), false))
        {
            // Process the file
        }

    Is this a limitation of StreamReader? I can open the file in other applications (Notepad++ for example), so it can't be an O/S problem. Maybe I need to use some other class? If anyone knows how I can get round this (aside from closing Excel!) I'd be very grateful.
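
    One likely explanation, hedged as a sketch: that FileStream constructor defaults to FileShare.Read, which conflicts with the writable handle Excel keeps on an open workbook. Explicitly allowing other readers and writers usually resolves it:

        using System.IO;

        // Open the CSV for reading while tolerating Excel's existing write
        // handle: FileShare.ReadWrite says "others may keep reading and
        // writing this file while I have it open."
        using (var stream = new FileStream(
                   fullFilePath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        using (var reader = new StreamReader(stream))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // process each CSV line here
            }
        }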

    Read the article

  • What are some good books on software testing/quality?

    - by mjh2007
    I'm looking for a good book on software quality. It would be helpful if the book covered:

    - The software development process (requirements, design, coding, testing, maintenance)
    - Testing roles (who performs each step in the process)
    - Testing methods (white box and black box)
    - Testing levels (unit testing, integration testing, etc.)
    - Testing process (Agile, waterfall, spiral)
    - Testing tools (simulators, fixtures, and reporting software)
    - Testing of embedded systems

    The goal here is to find an easy-to-read book that summarizes the best practices for ensuring software quality in an embedded system. It seems most texts cover the testing of application software, where it is simpler to generate automated test cases or run a debugger. A book that provided solutions for improving quality in a system where the tests must be performed manually, and are therefore minimized, would be ideal.

    Read the article

  • Best recruiting SaaS available for tech startups?

    - by ajhit406
    I run a small startup and am always on the lookout for quality engineers. I've seen someone ask SO a similar question, but there was only one response, so I'm going to ask the community again (http://stackoverflow.com/questions/112766/free-application-to-keep-track-of-a-recruiting-process didn't suffice for me). Over a long period of time, I've found that my system of recruiting is incredibly inefficient. An applicant who might have been attractive the first week might become overshadowed by other applicants further down the road. Sometimes applicants who I ignored become relevant to a new position that opens up as a web app becomes more robust and takes a turn in a direction I didn't consider. These are all difficult to track. Knowing that recruiting intelligent people should be an ongoing process, what are the best web applications for managing that process? Are there any apps with features catered specifically towards tech startups? (Free or paid, doesn't matter.)

    Read the article

  • kill application remotely

    - by Burak
    Hello all, I have an sth.bat file which launches my Java program on computer A. I start this application from computer B by using "psstart \\computerA c:\sth.bat", but when it comes to killing it the same way, I'm limited to the process name, because when sth.bat runs I see a cmd.exe and a java.exe in the process list. I would have to use the process name with "pskill \\computerA processName", but I have more than one application named cmd.exe and java.exe. How can I solve this problem?
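
    One possible workaround, offered as a hedged sketch (it assumes WMI connectivity and administrative rights on computerA): match on the remote process's command line rather than its bare image name, so only the java.exe that sth.bat started is terminated.

        rem Run at an interactive prompt; inside a .bat file, double the percent signs (%%sth%%).
        wmic /node:computerA process where "name='java.exe' and commandline like '%sth%'" call terminate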

    Read the article

  • XmlReader in Silverlight: <test /> self-closing tag problem

    - by Ummar
    Hi, I am parsing XML in Silverlight, and in my XML I have tags like:

        <test attribute1="123" />
        <test1 attribute2="345">abc text</test1>

    I am using XmlReader to parse the XML like this:

        using (XmlReader reader = XmlReader.Create(new StringReader(xmlString)))
        {
            // Parse the file and display each of the nodes.
            while (reader.Read())
            {
                switch (reader.NodeType)
                {
                    case XmlNodeType.Element:
                        // process start tag here
                        break;
                    case XmlNodeType.Text:
                        // process text here
                        break;
                    case XmlNodeType.XmlDeclaration:
                    case XmlNodeType.ProcessingInstruction:
                        break;
                    case XmlNodeType.Comment:
                        break;
                    case XmlNodeType.EndElement:
                        // process end tag here
                        break;
                }
            }
        }

    but the problem is that for the test tag no EndElement is received, which is making my whole program logic wrong (for the test1 tag all works fine). Please help me out.
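
    A minimal sketch of the usual fix: a self-closing element like <test /> never produces an EndElement node, so check reader.IsEmptyElement on the start tag and treat it as its own end tag.

        using System;
        using System.IO;
        using System.Xml;

        class EmptyElementDemo
        {
            static void Main()
            {
                string xmlString =
                    "<root><test attribute1=\"123\" />"
                    + "<test1 attribute2=\"345\">abc text</test1></root>";

                using (XmlReader reader = XmlReader.Create(new StringReader(xmlString)))
                {
                    while (reader.Read())
                    {
                        switch (reader.NodeType)
                        {
                            case XmlNodeType.Element:
                                Console.WriteLine("start: " + reader.Name);
                                if (reader.IsEmptyElement)
                                {
                                    // <test /> reports no EndElement, so run the
                                    // end-tag handling for it right here.
                                    Console.WriteLine("end (implicit): " + reader.Name);
                                }
                                break;
                            case XmlNodeType.EndElement:
                                Console.WriteLine("end: " + reader.Name);
                                break;
                        }
                    }
                }
            }
        }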

    Read the article

  • Problem with SqlServer 2005 when opening connections

    - by Jose Obregon
    I have a WinForms application and I use EntLib to connect to a SQL Server 2005 DB. The application is working OK, but sometimes, and lately more often, we have started receiving this error from the db when opening the connection:

        A connection was successfully established with the server, but then an error occurred during the login process. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233)

    The problem is intermittent. The user works fine for a couple of hours and then suddenly the exception is thrown. Sometimes it happens when we run a small process that loads a file and then inserts the data into the db. Please, if anybody has any thoughts on this, help me.

    Read the article

  • Sending and receiving async over multiprocessing.Pipe() in Python

    - by dcolish
    I'm having some issues getting Pipe.send to work in this code. What I would ultimately like to do is send and receive messages to and from the foreign process while it's running in a fork. This is eventually going to be integrated into a pexpect loop for talking to interpreter processes.

        from multiprocessing import Process, Pipe

        def f(conn):
            cmd = ''
            if conn.poll():
                cmd = conn.recv()
            i = 1
            i += 1
            conn.send([42 + i, cmd, 'hello'])

        if __name__ == '__main__':
            parent_conn, child_conn = Pipe()
            p = Process(target=f, args=(child_conn,))
            p.start()
            from pdb import set_trace; set_trace()
            while parent_conn.poll():
                print parent_conn.recv()  # prints "[42, None, 'hello']"
            parent_conn.send('OHHAI')
            p.join()
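
    For an ongoing conversation rather than a single exchange, a minimal sketch, assuming a simple request/reply protocol with a 'quit' sentinel (the protocol itself is an assumption, not part of the question):

        from multiprocessing import Process, Pipe


        def f(conn):
            # Serve requests until the parent sends the sentinel.
            while True:
                cmd = conn.recv()          # blocks until a message arrives
                if cmd == 'quit':
                    break
                conn.send(['ack', cmd, 'hello'])


        if __name__ == '__main__':
            parent_conn, child_conn = Pipe()
            p = Process(target=f, args=(child_conn,))
            p.start()
            for cmd in ['OHHAI', 'second message']:
                parent_conn.send(cmd)
                print parent_conn.recv()   # one matching reply per request
            parent_conn.send('quit')       # let the child exit cleanly
            p.join()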

    Read the article

  • Guidance: How to lay out your files for an Ideal Solution

    - by Martin Hinshelwood
    Creating a solution and having it maintainable over time is an art and not a science. I like being pedantic and having a place for everything, no matter how small. For setting up the Areas to run multiple projects under one solution, see my post on When should I use Areas in TFS instead of Team Projects, and for an explanation of branching see Guidance: A Branching strategy for Scrum Teams.

    Update 17th May 2010 – We are currently trialling running a single Sprint branch to improve our history.

    Whenever I set up a new Team Project I implement the basic version control structure. I put "readme.txt" files in the folder structure explaining the different levels, and a solution file called "[Client].[Product].sln" located at "$/[Client]/[Product]/DEV/Main" within version control. Developers should add any projects they need to create to that solution in the format "[Client].[Product].[ProductArea].[Assembly]", and these will automatically be picked up and built when you set up automated builds using Team Foundation Build. All test projects need to be done using MSTest to get proper IDE and Team Foundation Build integration out of the box, and should be named for the assembly they are testing, with a naming convention of "[Client].[Product].[ProductArea].[Assembly].Tests".

    Here is a description of the folder layout; this content should be replicated in readme files under version control in the relevant locations so that even developers new to the project can see how to do it.

    Figure: The Team Project level – at this level there should be a folder for each of the products that you are building, if you are using Areas correctly in TFS 2010. You should try very hard to avoid spaces, as these things always end up in a URL eventually, e.g. "Code Auditor" should be "CodeAuditor".

    Figure: The Product level – at this level there should be only three folders (DEV, RELEASE and SAFE), all of which should be in capitals. These folders represent the three stages of your application production line. Each of them may contain multiple branches, but this format leaves all of your branches at the same level.

    Figure: The DEV folder is where all of the development branches reside. The DEV folder will contain the "Main" branch and all feature branches, if they are being used. The DEV designation specifies that all code in every branch under this folder has not been released or made ready for release, and feature branches MUST merge (Forward Integrate) from Main and stabilise prior to merging (Reverse Integrate) back down into Main and being decommissioned.

    Figure: In the feature branching scenario only merges are allowed onto Main; no development can be done there. Once we have a mature product it is important that new features being developed in parallel are kept separate. This would most likely be used if we had more than one Scrum team working on a single product.

    Figure: When we are ready to do a release of our software we create a release branch that is then stabilised prior to deployment. This protects the serviceability of our released code, allowing developers to fix bugs and re-release an existing version.

    Figure: All bugs found in a release are fixed on the release branch, and a new deployment is created. After the deployment is created, the bug fixes are then merged (Reverse Integration) into the Main branch. We do this so that we separate our development from our production-ready code.

    Figure: SAFE or RTM is a read-only record of what you actually released; labels are not immutable, so they are useless in this circumstance. When we have completed stabilisation of the release branch and are ready to deploy to production, we create a read-only copy of the code for reference. In some cases this could be a regulatory concern, but in most cases it protects the company building the product from legal entanglements based on what you did or did not release.

    Figure: This allows us to reference any particular version of our application that was ever shipped.

    In addition, I am an advocate of having a single solution with all the project folders directly under the "Trunk"/"Main" folder, and using the full name for the project folders.

    Figure: The ideal solution.

    If you must have multiple solutions because you need to use more than one version of Visual Studio, name the solutions "[Client].[Product][VSVersion].sln" and have them reside in the same folder as the other solution. This makes automated builds easier and improves the discoverability of your code and its dependencies.

    Send me your feedback!

    Technorati Tags: VS ALM, VSTS Developing, VS 2010, VS 2008, TFS 2010, TFS 2008, TFBS

    Read the article

  • debugging error -- error attaching to w3wp.exe

    - by George2
    Hello everyone, I am using VSTS 2008 + .NET 3.5 + C#. I developed a custom Forms authentication module for IIS 7.0, and I attach to w3wp.exe to debug this module. During the attach process (I just select Tools - Attach to Process; no further operation is performed on the computer I am debugging, I just wait for the attach to complete), I get the following error. Any ideas what is wrong?

        The web server process that was being debugged has been terminated by Internet Information Services (IIS). This can be avoided by configuring Application Pool ping settings in IIS. See help for further details.

    thanks in advance, George

    Read the article

  • pure-specifier on function-definition

    - by bebul
    While compiling on GCC I get the error "pure-specifier on function-definition", but VS2005 compiles the same code:

        class Dummy {
            // error: pure-specifier on function-definition; VS2005 compiles it
            virtual void Process() = 0 {};
        };

    But when the definition of this pure virtual function is not inline, it works:

        class Dummy {
            virtual void Process() = 0;
        };

        void Dummy::Process() {}  // compiles on both GCC and VS2005

    What does the error mean? Why can't I define the function inline? Is it legal to evade the compile issue as shown in the second code sample?
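
    For context, a hedged sketch: standard C++ forbids combining the "= 0" pure-specifier with an inline body (the VS2005 behaviour is a Microsoft extension), but a pure virtual function may still have an out-of-line definition, which derived classes can call explicitly:

        #include <iostream>

        class Dummy {
        public:
            virtual ~Dummy() {}
            virtual void Process() = 0;    // pure, yet a definition may exist
        };

        // Out-of-line definition of the pure virtual: legal standard C++.
        void Dummy::Process() { std::cout << "base work\n"; }

        class Impl : public Dummy {
        public:
            virtual void Process() {
                Dummy::Process();          // explicitly reuse the base definition
                std::cout << "derived work\n";
            }
        };

        int main() {
            Impl i;
            i.Process();   // prints "base work" then "derived work"
            return 0;
        }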

    Read the article

  • Database-as-a-Service on Exadata Cloud

    - by Gagan Chawla
    Note – Oracle Enterprise Manager 12c DBaaS is platform agnostic and is designed to work on Exadata/non-Exadata, physical/virtual, Oracle/non-Oracle platforms, and it is not a mandatory requirement to use Exadata as the base platform.

    Database-as-a-Service (DBaaS) is an important trend these days, and the top business drivers motivating customers towards a private database cloud model include constant pressure to reduce IT cost and complexity, and to improve agility and quality of service. The first step many enterprises take in their journey towards cloud computing is to move to a consolidated and standardized environment, and since Exadata is already a proven, best-in-class consolidation platform, we are now seeing more and more customers evolve from an Exadata based platform into an agile, self-service-driven private database cloud using Oracle Enterprise Manager 12c. Together, Exadata Database Machine and Enterprise Manager 12c provide the industry's most comprehensive and integrated solution to transform a typical silo'ed environment into an enterprise-class database cloud with self service, rapid elasticity and pay-per-use capabilities.

    In today's post, I'll list the important steps to enable DBaaS on Exadata using Enterprise Manager 12c. These steps are chalked down based on a recent DBaaS implementation from a real customer engagement:

    1. Project planning – The first step involves defining the scope of the implementation, mapping functional requirements and objectives to use cases, defining high availability, network and security requirements, and delivering the project plan. In a cloud project you plan around technology, business and processes all together, so ensure you engage your actual end users and stakeholders early on, right from the scoping and planning stage.

    2. Set up your EM 12c Cloud Control site – Once project plan approval and sign-off from stakeholders is achieved, refer to the EM 12c install guide. Some important tips for the site setup phase:
       - Review the new EM 12c sizing paper before you get started with the install.
       - Select the Cloud, Chargeback and Trending, and Exadata plug-ins for deployment during the install.
       - Refer to the EM 12c Administrator's Guide for high availability, security and network/firewall best practices and options.
       - Your management and managed infrastructure should not be combined, i.e. the EM 12c repository should not be hosted on the same Exadata where the target database cloud is to be set up.

    3. Set up roles and users – Cloud Administrator (EM_CLOUD_ADMINISTRATOR), Self Service Administrator (EM_SSA_ADMINISTRATOR) and Self Service User (EM_SSA_USER) are the important roles required for cloud lifecycle management. Roles and users are managed by the Super Administrator via the Setup menu -> Security option. For Self Service/SSA users, custom role(s) based on EM_SSA_USER should be created, and the EM_USER and PUBLIC roles should be revoked during SSA user account creation.

    4. Configure the Software Library – The Cloud Administrator configures the Software Library via the Enterprise menu -> Provisioning and Patching option; the storage location is the OMS shared filesystem. The Software Library is the centralized repository that stores all software entities and is often termed the 'local store'.

    5. Set up Self Update – Self Update is one of the most innovative new features in the EM 12c framework. Self Update is accessed by the Super Administrator via Setup -> Extensibility and is the unified delivery mechanism for all new and updated entities (agent software, plug-ins, connectors, gold images, provisioning bundles, etc.) in EM 12c.

    6. Deploy agents on all compute nodes and discover Exadata targets – Refer to the Exadata discovery cookbook for a detailed walkthrough that ensures successful discovery of the Exadata targets.

    7. Configure privilege delegation settings – This step involves deployment of the privilege setting template on all the nodes by the Super Administrator via the Setup menu -> Security option, with the option to define whether to use sudo or PowerBroker for all provisioning and patching operations.

    8. Provision Grid Infrastructure with RAC Database on the compute nodes – Software is provisioned in this step via a provisioning profile using EM 12c database provisioning. In the case of Exadata, the Grid Infrastructure and RAC Database software is already deployed on the compute nodes via OneCommand from Oracle, so the SSA Administrator just needs to discover the Oracle Homes and Listener as EM targets. Databases will be created as and when users request them from the cloud.

    9. Customize the Create Database deployment procedure – The actual database creation steps are "templatized" in this step by the Self Service Administrator, and the newly saved deployment procedure is used during service template creation in the next step. This is an important step, so make sure you have locked all the required variables marked as locked ('Y') in this table.

    10. Set up the Self Service Portal – This step involves setting up zones, user quotas, service templates and the chargeback plan. The SSA portal is set up by the Self Service Administrator via the Setup menu -> Cloud -> Database option, following the guided workflow. Refer to the DBaaS cookbook for details. You also have the option to customize the SSA login page via the steps documented in the EM 12c Cloud Administrator's Guide.

    11. Final checks – Define and document process guidelines for SSA users and administrators. Get your SSA users trained on the Self Service Portal features and the overall DBaaS model; SSA administrators should be familiar with the Self Service Portal setup pieces, the EM 12c database lifecycle management capabilities and the overall EM 12c monitoring framework.

    12. GO LIVE – Announce the rollout of Database-as-a-Service to your SSA users. Users can log in to the Self Service Portal and request/monitor/view their databases in the Exadata based database cloud. Congratulations! You just delivered a successful database cloud implementation project!

    In future posts, we will cover these additional topics around the database cloud:
    - DBaaS implementation tips and tricks, right from setup to self service to managing the cloud lifecycle
    - 'How to' enable copies of real production databases in DBaaS with rapid provisioning in the database cloud
    - A case study of a customer who recently achieved success with their transformational journey from a traditional silo'ed environment to an Exadata based database cloud using Enterprise Manager 12c

    More information:
    - Podcast on Database as a Service using Oracle Enterprise Manager 12c
    - Oracle Enterprise Manager 12c Installation and Administration Guide, Cloud Administration Guide
    - DBaaS Cookbook
    - Exadata Discovery Cookbook
    - Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    - Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • How to use IObservable/IObserver with ConcurrentQueue or ConcurrentStack

    - by James Black
    I realized that when I am trying to process items in a concurrent queue using multiple threads, while multiple threads can be putting items into it, the ideal solution would be to use the Reactive Extensions with the concurrent data structures. My original question is at: http://stackoverflow.com/questions/2997797/while-using-concurrentqueue-trying-to-dequeue-while-looping-through-in-parallel/

    So I am curious whether there is any way to have a LINQ (or PLINQ) query that will continuously be dequeueing as items are put into it. I am trying to get this to work in a way where I can have n producers pushing into the queue and a limited number of threads processing it, so I don't overload the database. If I could use the Rx framework then I expect I could just start it, and if 100 items are placed in within 100ms, then the 20 threads that are part of the PLINQ query would just process through the queue. There are three technologies I am trying to make work together:

    - The Rx Framework (Reactive LINQ)
    - PLINQ
    - The System.Collections.Concurrent structures
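
    Not the Rx answer, but a TPL-only sketch of the bounded-consumer pattern described above: a BlockingCollection wraps the ConcurrentQueue, producers Add items, and a capped Parallel.ForEach drains it as items arrive (the cap of 20 mirrors the question; all other names are illustrative):

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class BoundedConsumers
        {
            static void Main()
            {
                // BlockingCollection over ConcurrentQueue: consumers block on
                // GetConsumingEnumerable until producers add items.
                var queue = new BlockingCollection<int>(new ConcurrentQueue<int>());

                var producer = Task.Factory.StartNew(() =>
                {
                    for (int i = 0; i < 100; i++) queue.Add(i);
                    queue.CompleteAdding();   // signal: no more items coming
                });

                // Cap the consumers so the database isn't overloaded.
                // Note: the default partitioner may buffer items in chunks,
                // which is acceptable for a sketch.
                var options = new ParallelOptions { MaxDegreeOfParallelism = 20 };
                Parallel.ForEach(queue.GetConsumingEnumerable(), options, item =>
                {
                    Console.WriteLine("processed {0}", item);   // db work here
                });

                producer.Wait();
            }
        }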

    Read the article

  • What's the right way to kill child processes in perl before exiting?

    - by rarbox
    I'm running an IRC bot (Bot::BasicBot) which has two child processes running File::Tail, but when exiting they don't terminate, so I'm killing them using Proc::ProcessTable like this before exiting:

        my $parent = $$;
        my $proc_table = Proc::ProcessTable->new();
        for my $proc (@{$proc_table->table()}) {
            kill(15, $proc->pid) if ($proc->ppid == $parent);
        }

    It works, but I get this warning:

        14045: !!! Child process PID:14047 reaped:
        14045: !!! Child process PID:14048 reaped:
        14045: !!! Your program may not be using sig_child() to reap processes.
        14045: !!! In extreme cases, your program can force a system reboot
        14045: !!! if this resource leakage is not corrected.

    What else can I do to kill the child processes? The forked processes are created using the forkit method in Bot::BasicBot.
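
    The warning suggests the children are not being reaped after they die. A hedged sketch of the usual pattern: after signalling each child, waitpid on it so no zombie is left behind (@child_pids is a hypothetical array of the PIDs collected above):

        # Signal each child, then reap it so no zombie processes remain.
        for my $pid (@child_pids) {
            kill 15, $pid;          # SIGTERM: ask the child to exit
            waitpid($pid, 0);       # block until the child is reaped
        }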

    Read the article

  • Cleaning up temp folder after long-running subprocess exits

    - by dbr
    I have a Python script (running inside another application) which generates a bunch of temporary images. I then use subprocess to launch an application to view these. When the image-viewing process exits, I want to remove the temporary images. I can't do this from Python, as the Python process may have exited before the subprocess completes, i.e. I cannot do the following:

        p = subprocess.Popen(["imgviewer", "/example/image1.jpg", "/example/image2.jpg"])
        p.communicate()
        os.unlink("/example/image1.jpg")
        os.unlink("/example/image2.jpg")

    ..as this blocks the main thread, nor could I check for the pid exiting in a thread, etc. The only solution I can think of means I have to use shell=True, which I would rather avoid:

        cmd = ['imgviewer']
        cmd.append("/example/image2.jpg")
        for x in cleanup:
            cmd.extend(["&&", "rm", x])
        cmdstr = " ".join(cmd)
        subprocess.Popen(cmdstr, shell = True)

    This works, but is hardly elegant, and will fail with filenames containing spaces, etc. Basically, I have a background subprocess, and want to remove the temp files when it exits, even if the Python process no longer exists.
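
    One possible approach that avoids both blocking and shell=True, sketched under the assumption that a Python interpreter is available at sys.executable: launch a tiny detached watcher process whose only job is to wait for the viewer and then delete the files. The watcher keeps running even if the main Python process exits first; "imgviewer" is the hypothetical viewer from the question.

        import subprocess
        import sys

        images = ["/example/image1.jpg", "/example/image2.jpg"]

        # Source of the watcher: run the viewer (argv[1:]), wait for it to
        # exit, then remove the temp files. %r embeds the list as a literal.
        watcher = (
            "import os, subprocess, sys\n"
            "subprocess.call(sys.argv[1:])\n"
            "for f in %r:\n"
            "    try:\n"
            "        os.unlink(f)\n"
            "    except OSError:\n"
            "        pass\n"
        ) % images

        # Fire and forget: the watcher is an independent process, so the
        # main script may exit immediately without orphaning the cleanup.
        subprocess.Popen([sys.executable, "-c", watcher, "imgviewer"] + images)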

    Read the article

  • On Windows Mobile, how can I tell what other processes are reserving shared memory space?

    - by glutz78
    On Windows Mobile 6.1, I am using VirtualAlloc to reserve 2MB chunks, which returns me an address from the large shared memory area so the allocations do not count against my per-process virtual space (doc here: http://msdn.microsoft.com/en-us/library/aa908768.aspx). However, on some devices I notice that I am not able to reserve memory after a certain point: VirtualAlloc returns NULL (GetLastError() says out of memory). The only explanation for this that I see is that another process has already reserved a bunch of memory and my process is therefore unable to. Any idea where I can find a tool to show me the shared memory region of a WM device? Thanks.

    Read the article

  • Basics for implementing SSL on PHP Website

    - by KoolKabin
    Hi guys, I am here as the developer of a website. My website has different modules, one of which processes credit cards. In order to process credit cards I need to implement an SSL layer and serve those pages over it. For the rest of the modules SSL is optional. Now my questions are: 1.) Is the location of a file the same for http and https? 2.) Can a session be shared between http and https? This is required as I need the user login information and cart item information.

    Read the article

  • idiomatic batch processing of text in emacs?

    - by Stephen
    In python, you might do something like

        fout = open('out', 'w')
        fin = open('in')
        for line in fin:
            fout.write(process(line) + "\n")
        fin.close()
        fout.close()

    (I think it would be similar in many other languages as well). In emacs lisp, would you do something like

        (find-file "out")
        (setq fout (current-buffer))
        (find-file "in")
        (setq fin (current-buffer))
        (setq moreLines t)
        (while moreLines
          (setq begin (point))
          (move-end-of-line 1)
          (setq line (buffer-substring-no-properties begin (point)))
          ;; maybe
          (print (process line) fout)
          ;; or
          (save-excursion
            (set-buffer fout)
            (insert (process line)))
          (setq moreLines (= 0 (forward-line 1))))
        (kill-buffer fin)
        (kill-buffer fout)

    which I got inspiration (and code) from here. Or should I try something entirely different? And how do I remove the quotes that print adds around the string? Thanks!
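
    On the quoting question, a hedged sketch: print renders strings with their quotes, while princ writes them plainly, and its output-stream argument can be a buffer, so the result can go straight into fout.

        ;; `print' quotes strings ("..."); `princ' emits them without quotes.
        ;; The output-stream argument may be a buffer, so this inserts the
        ;; processed line (plus a newline) directly into fout.
        (princ (process line) fout)   ; `process' is the placeholder transform above
        (princ "\n" fout)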

    Read the article

  • Capture and display console output at the same time

    - by Patrick
    Hi, MSDN states that it is possible in .NET to capture the output of a process and display it in the console window at the same time. Normally when you set StartInfo.RedirectStandardOutput = true; the console window stays blank. As the MSDN site doesn't provide a sample for this, I was wondering if anyone has a sample or could point me to one?

    "When a Process writes text to its standard stream, that text is normally displayed on the console. By redirecting the StandardOutput stream, you can manipulate or suppress the output of a process. For example, you can filter the text, format it differently, or write the output to both the console and a designated log file." (MSDN)

    This post is similar to http://stackoverflow.com/questions/786726/capture-standard-output-and-still-display-it-in-the-console-window, by the way, but that post didn't end up with a working sample. Thanks a lot, Patrick
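
    A minimal working sketch, assuming the child's output is line-oriented: redirect stdout, subscribe to OutputDataReceived, and echo each line to both the console and a log file ("ping localhost" is just a stand-in child process):

        using System;
        using System.Diagnostics;
        using System.IO;

        class Tee
        {
            static void Main()
            {
                using (var log = new StreamWriter("output.log"))
                {
                    var psi = new ProcessStartInfo("ping", "localhost")
                    {
                        UseShellExecute = false,
                        RedirectStandardOutput = true,
                    };
                    using (var process = Process.Start(psi))
                    {
                        // Each line the child writes raises an event; write it
                        // to our own console *and* to the log file.
                        process.OutputDataReceived += (sender, e) =>
                        {
                            if (e.Data != null)
                            {
                                Console.WriteLine(e.Data);
                                log.WriteLine(e.Data);
                            }
                        };
                        process.BeginOutputReadLine();
                        process.WaitForExit();
                    }
                }
            }
        }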

    Read the article

  • Converting raw bytes into audio sound

    - by Afro Genius
    In my application I inherit the javastreamingaudio class from the FreeTTS package, then bypass the write method which sends an array of bytes to the SourceDataLine for audio processing. Instead of writing to the data line, I write this and subsequent byte arrays into a buffer which I then bring into my class and try to process into sound. My application processes sound as arrays of floats, so I convert to float and try to process, but I always get static back. I am sure this is the way to go but am missing something along the way. I know that sound is processed as frames and each frame is a group of bytes, so in my application I have to process the bytes into frames somehow. Am I looking at this the right way? Thanks in advance for any help.
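
    The static is often a sign that the bytes are being paired into samples incorrectly. A minimal sketch of the conversion, assuming 16-bit signed little-endian mono PCM (check the AudioFormat actually in use before trusting these assumptions):

        // Convert 16-bit signed little-endian PCM bytes into floats in [-1.0, 1.0).
        // Each mono frame is two bytes: sample = low | (high << 8).
        public static float[] bytesToFloats(byte[] pcm) {
            float[] samples = new float[pcm.length / 2];
            for (int i = 0; i < samples.length; i++) {
                int lo = pcm[2 * i] & 0xFF;   // low byte, treated as unsigned
                int hi = pcm[2 * i + 1];      // high byte keeps the sign
                short sample = (short) ((hi << 8) | lo);
                samples[i] = sample / 32768f; // scale into [-1.0, 1.0)
            }
            return samples;
        }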

    Read the article

  • Source Control and SQL Development – Part 3

    - by Ajarn Mark Caldwell
    In parts one and two of this series, I have been specifically focusing on the latest version of SQL Source Control by Red Gate Software. But I have been doing source-controlled SQL development for years, long before this product was available, and well before Microsoft came out with Database Projects for Visual Studio. "So, how does that work?" you may wonder. Well, let me share some of the details of how we do it where I work…

    The key to this approach is that everything is done via Transact-SQL script files: either natively written T-SQL, or generated. My preference is to write all my code by hand, which forces you to become better at your SQL syntax. But if you really prefer to use the Management Studio GUI to make database changes, you can still do that, and then use the Generate Scripts feature of the GUI to produce T-SQL scripts afterwards, and store those in your source control system. You can generate scripts for things like stored procedures and views by right-clicking on the database in the Object Explorer, and choosing Tasks, Generate Scripts (see figure 1 to the left). You can also do that for the CREATE scripts for tables, but that does not work when you have a table that is already in production and you need to make just a simple change, such as adding a new column or index. In this case, you can use the GUI to make the table changes, and then instead of clicking the Save button, click the Generate Change Script button. Then, once you have saved the change script, go ahead and execute it on your development database to actually make the change. I believe that it is important to actually execute the script rather than just click the Save button, because this is your first test that your change script is working and you didn't somehow lose a portion of the change.

    As you can imagine, all this generating of scripts can get tedious and tempting to skip entirely, so again, I would encourage you to just get in the habit of writing your own Transact-SQL code, and then it is just a matter of remembering to save your work, just like you are in the habit of saving changes to a Word or Excel document before you exit the program.

    So, now that you have all of these script files, what do you do with them? Well, we organize ours into folders labeled ChangeScripts, Functions, Views, and StoredProcedures, and those folders are loaded into our source control system. ChangeScripts contains all of the table and index changes, and anything else that is basically a one-time-only execution. Of course you want to write your scripts with qualifying logic so that if a script were accidentally run more than once in a database, it would not crash nor corrupt anything; but these scripts are really intended to be run only once in a database.

    Once you have your initial set of scripts loaded into source control, making a change such as altering a stored procedure becomes a simple matter of checking out your CREATE PROCEDURE* script, editing it in SSMS, saving the change, executing the script in order to effect the change in your database, and then checking the script back in to source control. Of course, this is where the lack of integration for source control systems within SSMS becomes an irritation, because it means that in addition to SSMS, I also have my source control client application running to do the check-out and check-in. And when you have 800+ procedures like we do, it can be quite tedious to locate the procedure I want to change in source control, check it out, then locate the script file in my working folder, open it in SSMS, make the change, save it, and then go back to source control to check in. Granted, it is not nearly as burdensome as, say, losing your source code and having to rebuild it from memory, or losing the audit trail that good source control systems provide. It is worth the effort, and this is how I have been doing development for the last several years.

    Remember that everything that SQL Server Management Studio does in modifying your database can also be done in plain Transact-SQL code, and this is what you are storing. And now I have shown you how you can do it all without spending any extra money. You already have source control, or can get free, open-source source control systems (almost seems like an oxymoron, doesn't it), and of course Management Studio is free with your SQL Server database engine software. So, whether you spend the money on tools to make it easier or not, you now have no excuse for not using source control with your SQL development.

    * In our current model, the scripts for stored procedures and similar database objects are written with an IF EXISTS…DROP… at the top, followed by the CREATE PROCEDURE… section, and that followed by a section that assigns permissions. This allows me to run the same script regardless of whether the procedure previously existed in the database. If the script were only an ALTER PROCEDURE, it would fail the first time that procedure was deployed to a database, unless you wrote other code to stub it if it did not exist. There are a few different ways you could organize your scripts for deployment, each with its own trade-offs, but I think it is absolutely critical that whichever way you organize things, you ensure that the same script is run throughout the deployment cycle, and do not allow customizations to creep in between TEST and PROD. If you do, then you have broken the integrity of your deployment process, because what you deployed to PROD was not exactly the same as what was tested in TEST, so you have effectively released untested code into PROD.
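
    A minimal sketch of the rerunnable script pattern that footnote describes; the procedure, table, and role names here are illustrative, not from the original post:

        IF EXISTS (SELECT * FROM sys.objects
                   WHERE object_id = OBJECT_ID(N'dbo.GetCustomer')
                     AND type = N'P')
            DROP PROCEDURE dbo.GetCustomer;
        GO

        CREATE PROCEDURE dbo.GetCustomer
            @CustomerID int
        AS
        BEGIN
            SELECT CustomerID, Name
            FROM dbo.Customer
            WHERE CustomerID = @CustomerID;
        END
        GO

        -- Permissions section: reapplied on every run, since the DROP above
        -- removed any grants along with the procedure.
        GRANT EXECUTE ON dbo.GetCustomer TO AppRole;
        GO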

    Read the article

  • Check to see if file transfer is complete

    - by Cymon
    We have a daily job that processes files delivered from an external source. The process usually runs fine without any issues but every once in a while we have an issue of attempting to process a file that is not completely transferred. The external source SCPs these files from a UNIX server to our Windows server. From there we try to process the files. Is there a way to check to see if a file is still being transferred? Does UNIX put a lock on a file while SCPing it that we could check on the Windows side?
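
    Whether the receiving SSH server holds a Windows-visible lock on the destination file depends on the implementation, so it is safer not to rely on one. Two common mitigations are having the sender upload to a temporary name and rename on completion, or polling until the file size stops changing. A hedged sketch of the size-stability heuristic in Python (the settle time is an arbitrary assumption):

        import os
        import time

        def transfer_complete(path, settle_seconds=10):
            """Treat the file as fully transferred once its size has stayed
            constant for settle_seconds. Purely a heuristic: a stalled
            transfer also looks stable, so pair this with a sender-side
            rename-on-complete convention if you can."""
            last_size = -1
            while True:
                size = os.path.getsize(path)
                if size == last_size:
                    return True
                last_size = size
                time.sleep(settle_seconds)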

    Read the article
