Search Results

Search found 31269 results on 1251 pages for 'process management'.


  • Accurev SCM

    - by FlySwat
    Does anyone use AccuRev for source control management? We are switching (eventually) from StarTeam to AccuRev. My initial impression is that the GUI tool is severely lacking; however, the underlying engine and the branches-as-streams concept are incredible. The biggest difficulty we are facing is assessing our own DIY tools that interfaced with StarTeam, and either replacing them with new DIY tools or finding and purchasing appropriate replacements. Additionally, is anyone using the AccuWork component for issue management? StarTeam had a very nice change request system, and AccuWork does not come close to matching it. We are evaluating either using AccuWork or buying a third-party package such as JIRA. Opinions?


  • AudioRecord problems with non-HTC devices

    - by Marc
    I'm having trouble using AudioRecord. Here's an example using some code derived from the splmeter project:

        private static final int FREQUENCY = 8000;
        private static final int CHANNEL = AudioFormat.CHANNEL_CONFIGURATION_MONO;
        private static final int ENCODING = AudioFormat.ENCODING_PCM_16BIT;
        private int BUFFSIZE = 50;
        private AudioRecord recordInstance = null;
        ...
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
        recordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC, FREQUENCY, CHANNEL, ENCODING, 8000);
        recordInstance.startRecording();
        short[] tempBuffer = new short[BUFFSIZE];
        int retval = 0;
        while (this.isRunning) {
            for (int i = 0; i < BUFFSIZE - 1; i++) {
                tempBuffer[i] = 0;
            }
            retval = recordInstance.read(tempBuffer, 0, BUFFSIZE);
            ... // process the data
        }

    This works perfectly on the HTC Dream and the HTC Magic without any log warnings/errors, but causes problems on the emulators and on a Nexus One. On the Nexus One, it simply never returns useful data. I cannot provide any other useful information, as I'm having a remote friend do the testing. On the emulators (Android 1.5, 2.1 and 2.2), I get weird errors from the AudioFlinger and buffer overflows with the AudioRecordThread. I also get a major slowdown in UI responsiveness (even though the recording takes place in a separate thread from the UI). Is there something apparent that I'm doing incorrectly? Do I have to do anything special for the Nexus One hardware?
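
    One hardware-dependent detail worth checking, offered as a hypothesis rather than a confirmed fix: the constructor above hard-codes 8000 as the buffer size, but devices and emulators differ in the minimum they accept, and AudioRecord needs at least AudioRecord.getMinBufferSize(...) bytes to initialize reliably. A minimal sketch of the defensive setup:

        import android.media.AudioFormat;
        import android.media.AudioRecord;
        import android.media.MediaRecorder;

        class RecorderFactory {
            // Sketch: derive the buffer size from the device instead of hard-coding it.
            // getMinBufferSize() returns the smallest working buffer in bytes, or a
            // negative error code if the format is unsupported on this hardware.
            static AudioRecord createRecorder() {
                final int frequency = 8000;
                final int channel = AudioFormat.CHANNEL_CONFIGURATION_MONO;
                final int encoding = AudioFormat.ENCODING_PCM_16BIT;

                int minSize = AudioRecord.getMinBufferSize(frequency, channel, encoding);
                if (minSize <= 0) {
                    throw new IllegalStateException("Unsupported recording format: " + minSize);
                }
                AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                        frequency, channel, encoding, minSize);
                if (recorder.getState() != AudioRecord.STATE_INITIALIZED) {
                    recorder.release();
                    throw new IllegalStateException("AudioRecord failed to initialize");
                }
                return recorder;
            }
        }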


  • SVN: marking a file production-ready

    - by dan.codes
    I am kind of new to SVN; I haven't used it in detail, basically just checking out trunk, committing, then exporting and deploying. I am working on a big project now with several developers, and we are looking for the best deployment process. The one thing we are hung up on is the best way to tag, branch and so on. We are used to CVS, where all you have to do is commit a file and tag it as production-ready, and untagged code will not get deployed. I see that SVN handles tagging differently than CVS. I figure I am looking at this and making it overly complex. It seems the only way to work on a project and commit files without them being in the production code is to do it in a branch and then merge those changes when you are ready for them to be deployed. I am assuming you could also be working on other code that should be deployed, so you would have to be switching between working copies, because otherwise you are working on a branch that isn't getting mixed in with the trunk or production branch? This process seems overly complex, and I was wondering if anyone could give me what you think is the best process for managing this.
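
    For reference, the usual Subversion idiom for "production ready" is a cheap server-side copy into tags/, with deployments always coming from the tag; a sketch, assuming the conventional trunk/branches/tags layout (the repository URL is a placeholder):

        # mark the current trunk as production-ready
        svn copy http://svn.example.com/repo/trunk \
                 http://svn.example.com/repo/tags/release-1.2 \
                 -m "Tag release 1.2 as production-ready"

        # deploy from the tag, never from trunk
        svn export http://svn.example.com/repo/tags/release-1.2 /var/www/deploy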


  • Automatic Deployment of Windows Application

    - by dileepkrishnan
    Hi, we have set up continuous integration in our development environment using SVN, CC.Net, MSBuild and NUnit. Now we want to automate the process of moving (copying) builds from one stage to the next, like this: whenever a new build succeeds in Dev, it should be copied automatically to the QA server (a folder on the QA server, to be exact). Whenever a QA build passes its tests, that build should be copied to the UAT server (a folder on the UAT server, to be exact); this should be implemented as a process (a CC task, for example) which we can start when QA succeeds. Whenever a UAT build passes its tests, it should be copied to the PROD server (a folder on the PROD server, to be exact), again as a process we can start when UAT succeeds. How do I implement this? Can it be done using CC.Net alone? Or using MSBuild? Or do I need to employ both? Please advise what exactly needs to be done. Thanks, Dileep Krishnan
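
    Either tool can do the copy; the split that tends to work is MSBuild owning the file operations and CC.Net deciding when to run them. As a sketch only (server names, share paths and property names are placeholders), a promotion target that CC.Net could invoke through its ordinary msbuild task after the tests pass:

        <!-- PromoteBuild.proj: sketch of a promotion step -->
        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <ItemGroup>
            <BuildOutput Include="$(BuildDir)\**\*.*" />
          </ItemGroup>
          <Target Name="PromoteToQA">
            <!-- mirror the build output into a per-build drop folder on the QA server -->
            <Copy SourceFiles="@(BuildOutput)"
                  DestinationFiles="@(BuildOutput->'\\qaserver\drops\$(BuildNumber)\%(RecursiveDir)%(Filename)%(Extension)')" />
          </Target>
        </Project>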


  • WCF Web Service - Service Unavailable

    - by born to hula
    I have a WCF web service hosted in an IIS application pool. Lately I've been getting "Service Unavailable" when trying to make calls to this web service. The first thing I tried was restarting the application pool; I did, and after a couple of seconds it crashed and stopped. Looking at the Event Viewer, I found these messages, which so far haven't helped me find where the problem is:

        A process serving application pool 'X' reported a failure. The process id was '11616'.
        The data field contains the error number. For more information, see Help and Support
        Center at http://go.microsoft.com/fwlink/events.asp.

    After getting a couple of these, I got this one:

        Application pool 'X' is being automatically disabled due to a series of failures in the
        process(es) serving that application pool. For more information, see Help and Support
        Center at http://go.microsoft.com/fwlink/events.asp.

    I've already checked permissions and the application pool configuration, but everything seems to be OK. Has anyone been through this? Thanks in advance.


  • Why does GetWindowThreadProcessId return 0 when called from a service

    - by Marve
    When using the following class in a console application, with at least one instance of Notepad running, GetWindowThreadProcessId correctly returns a non-zero thread id. However, if the same code is included in a Windows service, GetWindowThreadProcessId always returns 0 and no exceptions are thrown. Changing the user the service runs under to the same one running the console application didn't alter the result. What causes GetWindowThreadProcessId to return 0 even when it is given a valid hwnd? And why does it behave differently in the console application and the service? Note: I am running Windows 7 32-bit and targeting .NET 3.5.

        public class TestClass {
            [DllImport("user32.dll")]
            static extern uint GetWindowThreadProcessId(IntPtr hWnd, IntPtr ProcessId);

            public void AttachToNotepad() {
                var processesToAttachTo = Process.GetProcessesByName("Notepad");
                foreach (var process in processesToAttachTo) {
                    var threadID = GetWindowThreadProcessId(process.MainWindowHandle, IntPtr.Zero);
                    ....
                }
            }
        }

    Console code:

        class Program {
            static void Main(string[] args) {
                var testClass = new TestClass();
                testClass.AttachToNotepad();
            }
        }

    Service code:

        public class TestService : ServiceBase {
            private TestClass testClass = new TestClass();

            static void Main() {
                ServiceBase.Run(new TestService());
            }

            protected override void OnStart(string[] args) {
                testClass.AttachToNotepad();
                base.OnStart(args);
            }

            protected override void OnStop() {
                ...
            }
        }
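
    A plausible explanation, offered as a hypothesis: on Windows 7 a service runs in session 0, isolated from the interactive desktop, so it cannot see windows belonging to the logged-on user's session. Process.MainWindowHandle then comes back as IntPtr.Zero, and GetWindowThreadProcessId(IntPtr.Zero, ...) simply returns 0 without throwing. A small guard makes that failure mode visible:

        // Sketch: check the handle before trusting the thread id.
        foreach (var process in Process.GetProcessesByName("Notepad"))
        {
            if (process.MainWindowHandle == IntPtr.Zero)
            {
                // Session 0 (the service) cannot see windows in the user's session.
                Console.WriteLine("No visible main window for PID {0} from this session", process.Id);
                continue;
            }
            uint threadId = GetWindowThreadProcessId(process.MainWindowHandle, IntPtr.Zero);
        }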


  • Reporting Services "cannot connect to the report server database"

    - by Dano
    We have Reporting Services running, and twice in the past 6 months it has been down for 1-3 days before suddenly starting to work again. The errors range from not being able to view the tree root in a browser, down to being able to enter parameters on a report but crashing before the report can generate. Looking at the logs, there is one error and one warning which seem to correspond somewhat.

    Error:

        Event Type: Error
        Event Source: Report Server (SQL2K5)
        Event Category: Management
        Event ID: 107
        Date: 2/13/2009
        Time: 11:17:19 AM
        User: N/A
        Computer: ********
        Description: Report Server (SQL2K5) cannot connect to the report server database.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Warning (always comes before the previous error):

        Event code: 3005
        Event message: An unhandled exception has occurred.
        Event time: 2/13/2009 11:06:48 AM
        Event time (UTC): 2/13/2009 5:06:48 PM
        Event ID: 2efdff9e05b14f4fb8dda5ebf16d6772
        Event sequence: 550
        Event occurrence: 5
        Event detail code: 0
        Process information:
            Process ID: 5368
            Process name: w3wp.exe
            Account name: NT AUTHORITY\NETWORK SERVICE
        Exception information:
            Exception type: ReportServerException
            Exception message: For more information about this error navigate to the report server
            on the local server machine, or enable remote errors.

    During the downtime we tried restarting everything, from the server RS runs on to the database it calls to fill reports, with no success. When I came in Monday morning it was working again. Anyone out there have any ideas on what could be causing these issues? Edit: Tried both suggestions below several months ago, to no avail. The issue hasn't arisen since; maybe something out of my control has changed.


  • Strategy for developing a multi-function ASP.NET web application

    - by user247023
    I'm about to start a new project and want some advice on how to implement it. I need a web application which contains a booking module for reserving timeslots, and a time management module which will enable employees to clock in / clock out. If I am writing an update to the time management module, I don't want to disrupt the booking engine's availability by releasing a new solution containing both modules. To make things more difficult, there is some shared functionality, like common users, roles and security. Here's a suggestion I've gotten, which sounds a bit cruddy but may be functional: write a 'container' web application which consists of basically a frame and authentication/security features, with links which load the two independently built and released web applications into the frame. I can see that if I wanted to update the time management module, I would only need to build and release it separately, and the rest of the solution would be untouched. Any better alternatives?


  • How to accept a confirmation automatically in PowerShell for Outlook

    - by user2919845
    I have a script that exports attachments from Outlook email (see below). It works correctly on one PC, but on another PC there is a problem: Outlook pops up a security prompt and wants an answer (Allow / Deny / Help). If I manually click Allow or Deny, it works correctly. I want to automate this. Can you give me a suggestion how to do it in PowerShell? I have tried to configure Outlook not to show this message, but I didn't succeed. My script:

        # <-- Script ---------->
        # works with the Outlook Inbox folder
        # checks whether an email has ".txt" attachments and saves them to $filepath

        # path for exported files - attachments
        $filepath = "d:\Exported_files\"

        # create the Outlook COM object
        $o = New-Object -comobject outlook.application
        $n = $o.GetNamespace("MAPI")

        # $f - the Inbox folder
        $f = $n.GetDefaultFolder(6) # 6 - Inbox

        # take the newest 10 emails, and from those only the ones with attachments
        $f.Items | select -last 10 | Where {$_.Attachments} | foreach {
            # process only unread mail
            if ($_.unread -eq $True) {
                # mark the processed mail as read, so it is not processed again the next day
                $_.unread = $False
                $SenderName = $_.SenderName
                Write-Host "Email from: ", $SenderName
                # process all attachments
                $_.attachments | foreach {
                    $a = $_.filename
                    If ($a.Contains(".txt")) {
                        Write-Host $SenderName, " ", $a
                        # copy *.txt attachments to folder $filepath
                        $_.saveasfile((Join-Path $filepath "$a"))
                    }
                }
            }
        }
        Write-Host "Finish"
        # <------ End Script ---------------------------------->
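
    The prompt is Outlook's object model guard, and the script itself cannot answer it; the usual levers are on the machine, not in the code. A sketch of the registry route, with loud caveats: the key path and the numeric meaning of the values must be verified against the Office Administrative Templates for the installed version (12.0 = Outlook 2007, 14.0 = 2010), and on machines without current antivirus Outlook may prompt regardless:

        # Sketch only - these value names exist in the Outlook GPO templates, but
        # verify the version number in the path and the value semantics first.
        $key = "HKCU:\Software\Policies\Microsoft\Office\14.0\Outlook\Security"
        New-Item -Path $key -Force | Out-Null
        # assumed meaning: 1 = automatically approve object-model access
        Set-ItemProperty -Path $key -Name PromptOOMAddressInformationAccess -Value 1
        Set-ItemProperty -Path $key -Name PromptOOMAddressBookAccess -Value 1

    An alternative that avoids the guard entirely is the Redemption library, which wraps Extended MAPI and is commonly used for exactly this scenario.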


  • Cookie blocked/not saved in IFRAME in Internet Explorer

    - by Piskvor
    I have two websites, let's say they're example.com and anotherexample.net. On anotherexample.net/page.html, I have an IFRAME SRC="http://example.com/someform.asp". That IFRAME displays a form for the user to fill out and submit to http://example.com/process.asp. When I open the form ("someform.asp") in its own browser window, all works well. However, when I load someform.asp as an IFRAME in IE 6 or IE 7, the cookies for example.com are not saved. In Firefox this problem doesn't appear. For testing purposes, I've created a similar setup on http://newmoon.wz.cz/test/page.php. example.com uses cookie-based sessions (and there's nothing I can do about that), so without cookies, process.asp won't execute. How do I force IE to save those cookies? Results of sniffing the HTTP traffic: on the GET /someform.asp response, there's a valid per-session Set-Cookie header (e.g. Set-Cookie: ASPKSJIUIUGF=JKHJUHVGFYTTYFY), but on the POST /process.asp request, there is no Cookie header at all. Edit3: some AJAX + server-side scripting is apparently capable of sidestepping the problem, but that looks very much like a bug, plus it opens a whole new set of security holes. I don't want my applications to use a combination of bug + security hole just because it's easy. Edit: the P3P policy was the root cause, full explanation below.
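
    For readers with the same symptom: IE6/IE7 apply their "third party" cookie rules to framed content, and by default they block cookies from a framed site that does not present a P3P compact policy. The well-known workaround is to send a P3P header from the framed domain (example.com here); a sketch in classic ASP, with the caveat that the compact-policy tokens below are the commonly quoted example and a real policy should reflect the site's actual privacy practices:

        <%
        ' Send a P3P compact policy so IE accepts cookies in a third-party IFRAME.
        ' Must be emitted before any output, on every page that sets or needs cookies.
        Response.AddHeader "P3P", "CP=""CAO PSA OUR"""
        %>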


  • How can I refactor this to work without breaking the pattern horribly?

    - by SnOrfus
    I've got a base class used for filtering. It's a template method object that looks something like this:

        public class Filter {
            public void Process(User u, GeoRegion r, int countNeeded) {
                List<account> selected = this.Select(u, r, countNeeded);            // 1
                List<account> filtered = this.Filter(selected, u, r, countNeeded);  // 2
                if (filtered.Count > 0) { /* do businessy stuff */ }                // 3
                if (filtered.Count < countNeeded)
                    this.SendToSuccessor(u, r, countNeeded - filtered.Count);       // 4
            }
        }

    Select(...) and Filter(...) are protected abstract methods implemented by the derived classes. Select(...) finds objects based on x criteria, and Filter(...) filters the selection further. If the remaining filtered collection has more than 1 object in it, we do some business stuff with it (unimportant to the problem here). SendToSuccessor(...) is called if there weren't enough objects found after filtering (it's a composite where the next class in succession is also derived from Filter but has different filtering criteria). All has been fine, but now I'm building another set of filters, which I was going to subclass from this. The filters I'm building require different parameters, though, and I don't want to implement the existing methods without using the parameters, nor add my new ones to the parameter list and have them go unused in the existing filters. They still perform the same logical process. I also don't want to complicate the consumer code, which looks like this:

        Filter f = new Filter1();
        Filter f2 = new Filter2();
        Filter f3 = new Filter3();
        f.Successor = f2;
        f2.Successor = f3;
        /* and so on, adding filters as successors to previous ones */

        foreach (User u in users) {
            foreach (GeoRegion r in regions) {
                f.Process(u, r, ##);
            }
        }

    How should I go about it?
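
    One direction, offered as a sketch rather than a prescription: collapse the parameters into a context object, so Process has a single stable signature and a new filter family extends the context instead of the parameter list. All the names below (FilterContext, Account, DoFilter) are stand-ins:

        // Sketch: a context object keeps the template method's signature stable.
        public class FilterContext
        {
            public User User { get; set; }
            public GeoRegion Region { get; set; }
            public int CountNeeded { get; set; }
            // a new filter family adds properties here, not new parameters
        }

        public abstract class Filter
        {
            public Filter Successor { get; set; }

            public void Process(FilterContext ctx)
            {
                List<Account> selected = Select(ctx);
                List<Account> filtered = DoFilter(selected, ctx);
                if (filtered.Count > 0) { /* do businessy stuff */ }
                // adjust ctx.CountNeeded here if successors should only fill the remainder
                if (filtered.Count < ctx.CountNeeded && Successor != null)
                    Successor.Process(ctx);
            }

            protected abstract List<Account> Select(FilterContext ctx);
            protected abstract List<Account> DoFilter(List<Account> selected, FilterContext ctx);
        }

    Existing filters simply ignore the context members they don't need, and the consumer loop stays as it is, except that it builds one FilterContext per (u, r) pair.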


  • Documenting software architectures that serve multiple markets

    - by wsb3383
    Hello, I'm the lead developer/architect wanna-be on a J2EE-based system/platform at work that serves both the real estate and automotive markets. The system consists of a set of database back ends, web services and two web clients. The platform ends up serving three different products: an internal vehicle inventory system for use by company analysts, an external dealer management system (commercialized product), and a real estate inventory system (commercialized). In other words, it follows a software product lines approach. My question is, I'm having trouble communicating to other technical people, and some business people, how this platform architecture is one system that serves multiple markets (by leveraging existing assets combined with minor modifications). Is there a formal modeling language that can simplify communicating this intent? I should note that I haven't read much about software product lines, so I'm not sure if there is a standard modeling approach to SPL that I'm not aware of. I'm also interested in knowing if there are special configuration management practices for such systems. Thanks,


  • GUI blocked while running a silent app (VC++)

    - by deb
    Hi, I have built a GUI in C++ (Windows XP, Visual C++ 2008) where you can configure some parameters; when I click the OK button, a silent application is launched (which uses the values set). When I do this, the GUI freezes, and it even disappears if you switch to other windows (it's still there, but you can only see a white space); when the other application finishes, the GUI works again. This is the correct behaviour - I don't want the user to be able to edit the fields - but it's a bit ugly that you can't see the GUI. Does anybody know an easy way to let the user switch to other windows and still see the GUI when switching back? Thanks in advance.

    Edited: Hi, I tried doing this, but the problem is that to run the apps in the background I had a function that uses CreateProcess, and the GUI freezes both ways: if I create a thread that creates the process, and if I create the process directly. Then I wait for the process to finish:

        if (!CreateProcess(NULL, Args, NULL, NULL, FALSE, CREATE_NEW_CONSOLE,
                           NULL, NULL, &StartupInfo, &ProcessInfo))
        {
            return GetLastError();
        }
        WaitForSingleObject(ProcessInfo.hProcess, INFINITE);
        if (!GetExitCodeProcess(ProcessInfo.hProcess, &rc))
            rc = 0;

    Any idea?
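
    The freeze is expected with this code: WaitForSingleObject blocks the thread that owns the window, so no messages get pumped and Windows eventually paints the window white. One standard fix, sketched below, is to pump messages while waiting (the cleaner alternative is to do the CreateProcess/wait on a worker thread and notify the GUI when done):

        // Sketch: replaces the plain WaitForSingleObject(...) call on the UI thread.
        // Waits for the child process but keeps dispatching window messages.
        DWORD waitResult;
        do {
            waitResult = MsgWaitForMultipleObjects(1, &ProcessInfo.hProcess,
                                                   FALSE, INFINITE, QS_ALLINPUT);
            if (waitResult == WAIT_OBJECT_0 + 1) {
                // A message arrived: pump it so the GUI stays responsive.
                MSG msg;
                while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
                    TranslateMessage(&msg);
                    DispatchMessage(&msg);
                }
            }
        } while (waitResult != WAIT_OBJECT_0);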


  • User to kernel mode big picture?

    - by fsdfa
    I have to implement a char device as an LKM. I know some OS basics, but I feel I don't have the big picture. In a C program, when I call a syscall, what I think happens is that the CPU switches to ring 0, then goes through the syscall vector and jumps to a kernel-memory-space function that handles it. (I think it does int 0x80 with the syscall number in eax - the offset into the syscall vector - but I'm not sure.) Then I'm in the syscall itself, but I guess that for the kernel it is the same process as before, only now running in kernel mode; I mean, the current PCB is still that of the process that invoked the syscall. So far, so good? Correct me if something is wrong. Other questions: how can I read/write process memory? If in the syscall handler I refer to an address, say 0xbfffffff, what does that address mean? A physical one? Some virtual kernel one?
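
    The mental model above is roughly right, and the answer to the last question is: it's a virtual address in the calling process's address space, which is why kernel code must not dereference it directly but go through copy_from_user/copy_to_user. A minimal sketch of a char-device read handler, to make the idea concrete:

        #include <linux/fs.h>
        #include <linux/uaccess.h>   /* copy_to_user(); <asm/uaccess.h> on older kernels */

        static char kbuf[] = "hello from the kernel\n";

        /* buf is a userspace virtual address (something like 0xbfffffff on
         * 32-bit x86); copy_to_user() validates and translates it for us. */
        static ssize_t mydev_read(struct file *filp, char __user *buf,
                                  size_t count, loff_t *ppos)
        {
            if (*ppos >= sizeof(kbuf))
                return 0;                        /* EOF */
            if (count > sizeof(kbuf) - *ppos)
                count = sizeof(kbuf) - *ppos;
            if (copy_to_user(buf, kbuf + *ppos, count))
                return -EFAULT;                  /* bad userspace pointer */
            *ppos += count;
            return count;
        }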


  • heroku logs --ps run showing nothing

    - by Zarne Dravitzki
    I have two running apps on Heroku, staging and production. They are near-identical environments (staging has some extra configs, e.g. rails-footnotes and the Bullet gem). When I run

        heroku logs --ps run --app jl-staging

    it returns log lines like

        2012-08-30T01:30:42+00:00 heroku[run.1]: Starting process with command `bundle exec rake jewellover:warn_users`

    This line comes from a task set to run with the free Heroku Scheduler. Everything works perfectly, but when I do the same with

        heroku logs --ps run --app jl-production

    there are no results - no heroku[run.1] process logs at all. Both environments have the same scheduled tasks, albeit at different times, and both run them at the specified times. Is there something I'm missing about heroku[run.1] processes in the production env? Does Heroku only keep the --ps logs for a certain amount of time? It seems to show less activity than the normal logs - maybe only 24 hours' worth rather than the last 100 lines. I need to log and debug the [run.1] process in the production env, specifically the jewellover:warn_users task. Any ideas?
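
    One detail that may matter here, hedged since the retention numbers have changed over time: Heroku's built-in log buffer is small (on the order of 1500 lines across all process types), so a busy production app can rotate a scheduled task's output away before you look. Two things worth trying:

        # ask for the maximum buffered history, filtered to run.* dynos
        heroku logs --num 1500 --ps run --app jl-production

        # or attach a log drain / logging add-on so run.1 output is retained
        heroku drains:add syslog://logs.example.com:514 --app jl-production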


  • Thoughts on streamlining multiple .Net apps

    - by John Virgolino
    We have a series of ASP.Net applications that were written over the course of 8 years, mostly in the first 3-4 years. They have been running quite well with little maintenance, but new functionality is being requested and we are running into IDE and platform issues. The apps were written in .Net 1.x and 2.x and run in separate spaces, but they are presented as a single suite of applications which use a common navigation toolbar (implemented as a user control). Every time we want to add something to a menu in the nav, we have to modify it in all the apps, which is a pain. Also, the various versions of Crystal Reports, and the fact that we used tables to organize the visual elements, mean we end up with a mess, especially with all the .Net versions running side by side. We need to streamline the suite and make it easier to add new apps without a hassle. We also need to bring all these apps under one .Net platform and IDE. In addition, there is a WordPress blog styled to match the application suite "integrated" into the UI, and a link to a MediaWiki wiki application as well. My current thinking is to use an open-source content management system (CMS) like Joomla (PHP-based, unfortunately, but it works well) as the user interface framework for style templating and menu management. Joomla's article management would let us migrate the wiki content into articles which could be published without interfering with the .Net apps. Then, essentially, we would use an IFrame within an "article" to "host" each .Net application, and then... upgrade the .Net apps to VS2010, strip out all the common header/footer controls, and migrate the styles to the style sheets used in the CMS. As I write this, I realize this is a lot of work, that it may cause optimization issues, and that using IFrames seems a bit like cheating - I've read about issues with them. I know we could use .Net application styling instead, but that seems like a lot more work (not sure, really). The use of a CMS to handle the blog and wiki is also appealing, unless there is a .Net CMS out there that can handle all of these requirements. Given all this, am I totally going in the wrong direction? We tried to use open source and integrate it over time, but now this has become hard to maintain. Am I not aware of some technology out there that will meet our requirements? Did we do this right, and should we just focus on streamlining the .Net apps? I understand that no matter what we do, it's going to be a lot of work. The community's considerable experience would be helpful. Thanks!! PS - A complete rewrite is not an option.


  • CMS for SmartPhones

    - by dde
    The company I work for has a document management and retrieval system. We are noticing employees use their smartphones more than their laptops, but they cannot access the document management system. So we are thinking about a CMS with persistent storage, perhaps developed in Java. I just started looking into Jease and dotCMS, and also checked the recommendations here in questions like "Best Open Source Java CMS". I find some CMSs too bulky for the simple stuff we need, which is basically document download/edit/upload plus some simple collaboration and personalization features. The smartphones of choice in our workforce are Nokia, BlackBerry and iPhone. The question is: are there Java-based CMSs with a persistent db aimed at smartphones right now? I also need replication of the complete database so the site can run offline.


  • Why oh why doesn't my asp.net treeview update?

    - by Brendan
    I'm using an ASP.net TreeView on a page with a custom XmlDataSource. When the user clicks a node of the tree, a DetailsView pops up and edits a bunch of things about the underlying object. All this works properly, and the underlying object gets updated in my background object-management classes. Yay! However, my TreeView just isn't updating the display - not immediately (which I would like), and not even on a full page reload (which is the minimal useful level I need). Am I subclassing XmlDataSource poorly? I really don't know. Can anyone point me in a good direction? Thanks! The markup looks about like this (chaff removed):

        <data:DefinitionDataSource runat="server" ID="DefinitionTreeSource" RootDefinitionID="uri:1"></data:DefinitionDataSource>

        <asp:TreeView ID="TreeView" runat="server" DataSourceID="DefinitionTreeSource">
            <DataBindings>
                <asp:TreeNodeBinding DataMember="definition" TextField="name" ValueField="id" />
            </DataBindings>
        </asp:TreeView>

        <asp:DetailsView ID="DetailsView1" runat="server" AutoGenerateRows="False" DataKeyNames="Id"
                         DataSourceID="DefinitionSource" DefaultMode="Edit">
            <Fields>
                <asp:BoundField DataField="Name" HeaderText="Name" HeaderStyle-Wrap="false" SortExpression="Name" />
                <asp:CommandField ShowCancelButton="False" ShowInsertButton="True" ShowEditButton="True" ButtonType="Button" />
            </Fields>
        </asp:DetailsView>

    And the DefinitionTreeSource code looks like this:

        public class DefinitionDataSource : XmlDataSource
        {
            public string RootDefinitionID
            {
                get
                {
                    if (ViewState["RootDefinitionID"] != null)
                        return ViewState["RootDefinitionID"] as String;
                    return null;
                }
                set
                {
                    if (!Object.Equals(ViewState["RootDefinitionID"], value))
                    {
                        ViewState["RootDefinitionID"] = value;
                        DataBind();
                    }
                }
            }

            public DefinitionDataSource() { }

            public override void DataBind()
            {
                base.DataBind();
                setData();
            }

            private void setData()
            {
                String defXML = "<?xml version=\"1.0\" ?>";
                Test.Management.TestManager.Definition root = Test.Management.TestManager.Definition.GetDefinitionById(RootDefinitionID);
                if (root != null)
                    this.Data = defXML + root.ToXMLString();
                else
                    this.Data = defXML + "<definition id=\"null\" name=\"Set Root Node\" />";
            }
        }
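
    A hedged guess at the missing piece: XmlDataSource caches its data (EnableCaching defaults to true), and a TreeView only re-reads the source when it is data-bound again, so updating the underlying object does not refresh the rendered tree by itself. One way to test that theory is to rebind after the edit completes (the handler below is invented and assumes OnItemUpdated="DetailsView1_ItemUpdated" is added to the DetailsView markup):

        // Sketch: force the tree to re-read the XML after an edit.
        protected void DetailsView1_ItemUpdated(object sender, DetailsViewUpdatedEventArgs e)
        {
            DefinitionTreeSource.DataBind();  // rebuilds Data from the updated object
            TreeView.DataBind();              // re-reads the data source into the tree
        }

    Setting EnableCaching="false" on the data source declaration is worth trying at the same time.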


  • Java Input/Output streams for unnamed pipes created in native code?

    - by finrod
    Is there a way to easily create Java Input/Output streams for unnamed pipes created in native code? Motivation: I need my own implementation of the Process class. The native code spawns me a new process with the subprocess' IO streams redirected to unnamed pipes. Problem: the file descriptors for the correct ends of those pipes make their way to Java. At this point I get stuck, as I cannot create a new FileDescriptor to pass to FileInputStream/FileOutputStream. I have used reflection to get around the problem and have communication with a simple bouncer subprocess running. However, I have a notion that this is not the cleanest way to go. Have you used this approach? Do you see any problems with it? (The platform will never change.) Searching around the internets revealed a similar solution using native code. Any thoughts before I dive into heavy testing of this approach are very welcome. I would like to give existing code a shot before writing my own IO stream implementations... Thank you.
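
    For reference, the reflection workaround mentioned above usually looks something like this sketch. It relies on the private int "fd" field of java.io.FileDescriptor, which is JDK-specific and not part of any public contract - exactly the cleanliness concern raised in the question:

        import java.io.FileDescriptor;
        import java.io.FileInputStream;
        import java.lang.reflect.Field;

        public final class NativeFdStreams {
            // Sketch: wrap a native pipe fd in a FileInputStream via reflection.
            public static FileInputStream inputStreamFor(int nativeFd) throws Exception {
                FileDescriptor desc = new FileDescriptor();
                Field fdField = FileDescriptor.class.getDeclaredField("fd");
                fdField.setAccessible(true);          // may fail under a SecurityManager
                fdField.setInt(desc, nativeFd);
                return new FileInputStream(desc);
            }
        }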


  • Ensuring that all callbacks were completed before sending a new request through a DuplexChannel usin

    - by Etan
    I am experiencing some issues when using a callback in a WCF project. First, the server invokes some function Foo on the client, which then forwards the request to a Windows Forms GUI.

    GUI class:

        delegate void DoForward();

        public void ForwardToGui()
        {
            if (this.cmdSomeButton.InvokeRequired)
            {
                DoForward d = new DoForward(ForwardToGui);
                this.Invoke(d);
            }
            else
            {
                Process(); // sets the result variable in the callback class as soon as it's done
            }
        }

    Callback class:

        object _m = new object();
        private int _result;

        public int result
        {
            get { return _result; }
            set
            {
                _result = value;
                lock (_m)
                {
                    Monitor.PulseAll(_m);
                }
            }
        }

        [OperationContract]
        public int Foo()
        {
            result = 0;
            Program.Gui.ForwardToGui();
            lock (_m)
            {
                Monitor.Wait(_m, 30000);
            }
            return result;
        }

    The problem now is that the user should be able to cancel the process, which doesn't work properly.

    Server interface:

        [OperationContract]
        void Cleanup();

    GUI class:

        private void Gui_FormClosed(object sender, EventArgs e)
        {
            Program.callbackclass.nextAction = -1; // so that the monitor pulses and Foo() returns
            Program.server.Cleanup();
        }

    The problem is that Cleanup() hangs. However, when I close the form while Process() is not running, it works properly. The cause seems to be that Cleanup() is called before the monitor pulses and Foo() returns, so a new request is sent to the server before the server's previous request has been answered. How can I solve this problem? How can I make sure, before calling Cleanup(), that no Foo() is currently executing?
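
    A hedged observation: this looks like the classic WCF duplex deadlock, where a call into the server (Cleanup) is made while the server's own callback into the client (Foo) has not yet returned, and the default ConcurrencyMode.Single serializes the two against each other until the timeout. The usual first experiment is to mark the callback implementation reentrant:

        // Sketch: let the channel accept a new call while an outbound call is
        // still in progress. Apply to the class implementing the callback
        // contract (and consider the same on the service side); this is a
        // hypothesis to test, not a guaranteed fix.
        [CallbackBehavior(ConcurrencyMode = ConcurrencyMode.Reentrant)]
        public class CallbackClass : ICallbackContract  // ICallbackContract = your callback interface
        {
            // existing result / Foo() members unchanged
        }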


  • Threaded Django task doesn't automatically handle transactions or db connections?

    - by Gabriel Hurley
    I've got Django set up to run some recurring tasks in their own threads, and I noticed that they were always leaving behind unfinished database connection processes (pgsql "Idle In Transaction"). I looked through the Postgres logs and found that the transactions weren't being completed (no ROLLBACK). I tried using the various transaction decorators on my functions, no luck. I switched to manual transaction management and did the rollback manually, that worked, but still left the processes as "Idle". So then I called connection.close(), and all is well. But I'm left wondering, why doesn't Django's typical transaction and connection management work for these threaded tasks that are being spawned from the main Django thread?
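
    For anyone hitting the same thing, the fix described above amounts to something like this sketch (pre-1.6 Django transaction API; in a bare spawned thread the request_started/request_finished signals never fire, so nothing commits, rolls back, or closes the connection for you):

        import threading

        from django.db import connection, transaction

        def my_task():
            pass  # placeholder for the real recurring job

        def run_task(task):
            """Manual transaction + connection cleanup for a worker thread."""
            try:
                with transaction.commit_on_success():  # commit, or roll back on exception
                    task()
            finally:
                # No request_finished signal here, so close the connection by hand
                # to avoid leaving an "idle in transaction" backend in Postgres.
                connection.close()

        threading.Thread(target=run_task, args=(my_task,)).start()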


  • Loading a Windows DLL in Java and instantiating a class from it

    - by Joy
    I have a Windows DLL from .NET, namely "System.Management.dll". I work with it using the C# code below:

        ManagementObjectSearcher searcher = new ManagementObjectSearcher("root\\CIMV2",
            "SELECT * FROM Win32_LogicalDisk WHERE Name = 'C:'");
        foreach (ManagementObject queryObj in searcher.Get())
        {
            Console.WriteLine("Win32_LogicalDisk instance: ");
            if (queryObj["VolumeSerialNumber"] != null)
            {
                Console.WriteLine("Drive Name : " + queryObj["Name"]);
                Console.WriteLine("VolumeSerialNumber:", queryObj["VolumeSerialNumber"]);
                SysdriveSerial = queryObj["VolumeSerialNumber"].ToString();
            }
        }

    Now I need this piece of code to be in Java. Can I do this without anything like unmanaged C++ code? I don't want to use unmanaged C++ to call this DLL. I want something like this:

        public class CallToCsharp {
            private static native void ManagementObjectSearcher();

            public static void main(String[] args) {
                System.loadLibrary("System.Management");
                System.out.println("Loaded");
                ManagementObjectSearcher searcher = new ManagementObjectSearcher("root\\CIMV2",
                    "SELECT * FROM Win32_LogicalDisk WHERE Name = 'C:'");
            }
        }
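
    For context: System.loadLibrary only loads native JNI libraries, not managed .NET assemblies, so the Java sketch above cannot work as written; bridging into the CLR needs something like jni4net or a COM bridge. If spawning a helper process is acceptable, though, the same WMI query is reachable with no unmanaged code at all via the wmic command-line tool (present on XP and later); a sketch:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class VolumeSerial {
            // Sketch: query WMI through wmic instead of loading a .NET assembly.
            public static void main(String[] args) throws Exception {
                Process p = new ProcessBuilder(
                        "wmic", "logicaldisk", "where", "Name='C:'",
                        "get", "VolumeSerialNumber").start();
                BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println(line.trim());
                }
                p.waitFor();
            }
        }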


  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes is especially critical, and they all use RAID. To date, I've therefore been backing them up by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:

      - the tarballs are sufficient to rebuild from, but it's certainly not a painless process (I recently tried this as part of a hardware upgrade on one of the boxes), and long-term it isn't sustainable
      - each box currently produces a tarball of a couple of hundred MB per day, 99% of which is the same as the previous day's
      - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the tarball and kill it)
      - again due to the size issue, I'm leaving out stuff it would be nice to include, such as the contents of users' home directories; there's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
      - there must be a better way

    So, my question is: how should I be doing this properly? The requirements are:

      - an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
      - as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
      - something that continues to scale with a couple more boxes, slightly more data, etc.
      - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
      - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup is what tar was created for: create a tar file once a month, add to it incrementally, and rsync the results to the remote box. But others probably have better suggestions.
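
    Since the post already gestures at tar's incremental mode, a sketch of what that looks like in practice (tools like rdiff-backup, rsnapshot, duplicity or BackupPC automate the same idea, with deduplication and retention handled for you; the offsite host is a placeholder):

        # Monthly full backup; the snapshot file records what was saved.
        tar --listed-incremental=/var/backups/etc.snar -czf etc-full.tar.gz /etc

        # Daily runs against the same snapshot file store only what changed.
        tar --listed-incremental=/var/backups/etc.snar -czf etc-$(date +%F).tar.gz /etc

        # Ship the results offsite.
        rsync -az /var/backups/ backup@offsite.example.com:/srv/backups/$(hostname)/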


  • What are SharePoint (MOSS 2007) development/deployment best practices?

    - by Satish
    We are deploying SharePoint MOSS 2007 at work, and I'm trying to come up with a SharePoint development and deployment methodology. We have Dev/QA/Prod environments, and I need a way, preferably automated, to deploy changes from Dev to QA and from there to Prod. We are creating site collections, web parts, etc. Some of it is done directly within SharePoint, some through SharePoint Designer or Visual Studio. I'm looking for a way to extract all this and deploy it to the other environments. I tried stsadm backup/restore and import/export, but they move the data along as well; I just need the structure deployed. Content deployment paths and jobs do the same thing. We use MSBuild and CruiseControl.NET to automate the build/deployment process for our other .NET projects, and I'm looking for something similar for SharePoint if possible. What are your best practices for this? Since my team is still learning, we don't have a defined process, and we are open to changing our development process if needed.
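
    One widely used answer to the structure-without-content problem, offered as a suggestion: package the structural artifacts (features, web parts, site definitions) as WSP solution packages, which script cleanly across environments; a sketch of the stsadm side of that pipeline, which CruiseControl.NET can drive like any other command-line step:

        REM Sketch: promote a solution package built by the CI server.
        stsadm -o addsolution -filename OurWebParts.wsp
        stsadm -o deploysolution -name OurWebParts.wsp -immediate -allowgacdeployment
        stsadm -o execadmsvcjobs

    Tools like WSPBuilder can generate the WSP from a Visual Studio project as part of the MSBuild run.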

