Search Results



  • Trapping messages in .NET

    - by user350632
    How can I trap a Windows system message (like WM_SETTEXT) that was sent by some window (the VLC player window in my case)? I've tried to inherit from the NativeWindow class and override WndProc like this:

        class VLCFilter : NativeWindow
        {
            System.IntPtr iHandle;
            const int WM_SETTEXT = 0x000C;

            public VLCFilter()
            {
                Process p = Process.GetProcessesByName("vlc")[0];
                iHandle = p.MainWindowHandle;
            }

            protected override void WndProc(ref Message aMessage)
            {
                base.WndProc(ref aMessage);
                if (aMessage.HWnd != iHandle) return;
                if (aMessage.Msg == WM_SETTEXT)
                {
                    MessageBox.Show("VLC window text changed!");
                }
            }
        }

    I have checked with Microsoft Spy++ that the WM_SETTEXT message is sent by the VLC player, but my code doesn't seem to get the work done. I've referred mainly to http://www.codeproject.com/kb/dotnet/devicevolumemonitor.aspx. I've been trying to make this work for some time with no success. What am I doing wrong? What am I not doing? Maybe there is an easier way to do this? My initial goal is to catch when the VLC player (which could be playing somewhere in the background and is not embedded in my application) repeats its playback (I have noticed that a WM_SETTEXT message is sent then, and I'm trying to detect it like this).
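
    One possible direction, sketched here only as an assumption-laden workaround: NativeWindow's WndProc only sees messages for windows created or assigned inside your own process, so an out-of-process window such as VLC's generally cannot be subclassed this way without a global hook. A simpler alternative is to poll the VLC window title and react when it changes; the class name TitleWatcher and the 500 ms interval below are made up for illustration.

        // Hypothetical sketch: poll the VLC main window title instead of subclassing its window.
        // Assumes a "vlc" process is already running; TitleWatcher is a made-up name.
        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;
        using System.Text;
        using System.Threading;

        class TitleWatcher
        {
            [DllImport("user32.dll", CharSet = CharSet.Auto)]
            static extern int GetWindowText(IntPtr hWnd, StringBuilder text, int maxCount);

            static void Main()
            {
                IntPtr handle = Process.GetProcessesByName("vlc")[0].MainWindowHandle;
                string lastTitle = null;
                while (true)
                {
                    var buffer = new StringBuilder(512);
                    GetWindowText(handle, buffer, buffer.Capacity);
                    string title = buffer.ToString();
                    if (title != lastTitle)
                    {
                        Console.WriteLine("VLC window text changed: " + title); // react here
                        lastTitle = title;
                    }
                    Thread.Sleep(500); // crude polling interval
                }
            }
        }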

    Read the article

  • SCCM 2012: How to properly update the content of an application?

    - by Omnomnomnom
    I recently set up a new SCCM 2012 environment at my workplace and now we are creating our applications for distribution. Some applications are set up using a script. During testing something was not right, and the content of the application needed to be changed, but the distribution point keeps serving the old content to the clients. I was wondering what the proper procedure is for updating the DPs when the content of an application changes. I have tried redistributing to the distribution points and deleting old revisions, but to no avail.

    Read the article

  • How to build a Hackintosh with an originally purchased Mac Lion OS? [closed]

    - by Nabayan
    Has anyone here ever built a Hackintosh themselves? I'm new to this technology and I'm trying to get hold of a Mac system where I can start coding in Xcode. I've tried VMplayer and was able to run the Mac environment, but unfortunately I was not able to run Xcode. I have an original Mac OS that I downloaded a few days ago and want to build a Hackintosh. I haven't come across any suitable information that gives a step-by-step, foolproof procedure for building a Hackintosh yourself. It would be really great if anyone could guide me here. Thanks. Naba

    Read the article

  • Named pipe stalls threads?

    - by entens
    I am attempting to push updates into a process via a named pipe, but in doing so my process loop now seems to stall on while ((line = sr.ReadLine()) != null). I'm a little mystified as to what might be wrong, as this is my first foray into named pipes.

        void RefreshThread()
        {
            using (NamedPipeServerStream pipeStream = new NamedPipeServerStream("processPipe", PipeDirection.In))
            {
                pipeStream.WaitForConnection();
                using (StreamReader sr = new StreamReader(pipeStream))
                {
                    for (;;)
                    {
                        if (StopThread == true)
                        {
                            StopThread = false;
                            return; // exit loop and terminate the thread
                        }

                        // push update for heartbeat
                        int HeartbeatHandle = ItemDictionary["Info.Heartbeat"];
                        int HeartbeatValue = (int)Config.Items[HeartbeatHandle].Value;
                        Config.Items[HeartbeatHandle].Value = ++HeartbeatValue;
                        SetItemValue(HeartbeatHandle, HeartbeatValue, (short)0xC0, DateTime.Now);

                        string line = null;
                        while ((line = sr.ReadLine()) != null)
                        {
                            // line is in the format: item, value, timestamp
                            string[] parts = line.Split(',');

                            // push update and store value in item cache
                            int handle = ItemDictionary[parts[0]];
                            string value = parts[1];
                            Config.Items[handle].Value = int.Parse(value);
                            DateTime timestamp = DateTime.FromBinary(long.Parse(parts[2]));
                            SetItemValue(handle, value, (short)0xC0, timestamp);
                        }

                        Thread.Sleep(500);
                    }
                }
            }
        }
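
    One possible direction, a minimal sketch only: keep the blocking ReadLine() on a dedicated thread and hand completed lines to the heartbeat loop through a thread-safe queue, so the loop itself never stalls. This assumes .NET 4 (ConcurrentQueue); the class and member names are made up for illustration.

        // Hypothetical sketch: the pipe reader blocks on its own thread and enqueues lines;
        // the heartbeat loop drains the queue without ever waiting on the pipe.
        using System;
        using System.Collections.Concurrent;
        using System.IO;
        using System.IO.Pipes;
        using System.Threading;

        class UpdateReader
        {
            readonly ConcurrentQueue<string> pendingLines = new ConcurrentQueue<string>();

            public void Start()
            {
                var reader = new Thread(ReadLoop) { IsBackground = true };
                reader.Start();
            }

            void ReadLoop()
            {
                using (var pipe = new NamedPipeServerStream("processPipe", PipeDirection.In))
                {
                    pipe.WaitForConnection();
                    using (var sr = new StreamReader(pipe))
                    {
                        string line;
                        while ((line = sr.ReadLine()) != null) // blocking is fine on this thread
                            pendingLines.Enqueue(line);
                    }
                }
            }

            // Called from the heartbeat loop: returns immediately whether or not data arrived.
            public bool TryGetLine(out string line)
            {
                return pendingLines.TryDequeue(out line);
            }
        }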

    Read the article

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory intensive and reads a large number of disk files; the total load can be more than 3 GB. A custom memory manager uses memory-mapped files to read such a huge amount of data. The files are mapped into the process memory space only when needed, so the process memory stays well under control. But what we observe is that, with memory mapping, the system cache keeps growing until it occupies all the available physical memory, which slows down the entire system. My question is: how do I prevent the system cache from hogging the physical memory? I attempted to remove the file buffering (by using FILE_FLAG_NO_BUFFERING), but then the read operations take a considerable amount of time and slow down the application's performance. How can I achieve the scalability without sacrificing much performance? What are the common techniques used in such cases? I don't have a good understanding of the WinXP OS caching behavior, so any good links explaining it would also be helpful.

    Read the article

  • Writing a synchronized thread-safety wrapper for NavigableMap

    - by polygenelubricants
    java.util.Collections currently provides the following utility methods for creating synchronized wrappers for various collection interfaces:

        synchronizedCollection(Collection<T> c)
        synchronizedList(List<T> list)
        synchronizedMap(Map<K,V> m)
        synchronizedSet(Set<T> s)
        synchronizedSortedMap(SortedMap<K,V> m)
        synchronizedSortedSet(SortedSet<T> s)

    Analogously, it also has six unmodifiableXXX counterparts. The glaring omission here is a utility method for NavigableMap<K,V>. It's true that NavigableMap extends SortedMap, but SortedSet also extends Set, and Set extends Collection, and Collections has dedicated utility methods for SortedSet and Set. Presumably NavigableMap is a useful abstraction, or else it wouldn't have been there in the first place, and yet there are no utility methods for it. So the questions are: Is there a specific reason why Collections doesn't provide utility methods for NavigableMap? How would you write your own synchronized wrapper for NavigableMap? Glancing at the source code of the OpenJDK version of Collections.java suggests that this is just a "mechanical" process. Is it true that in general you can add synchronized thread-safety like this? If it's such a mechanical process, can it be automated (an Eclipse plug-in, etc.)? Is this code repetition necessary, or could it have been avoided by a different OOP design pattern?

    Read the article

  • "Thread was being aborted" 0n large dataset

    - by Donaldinio
    I am trying to process 114,000 rows in a DataSet (populated from an Oracle database). I am hitting an error at around the 600 mark: "Thread was being aborted". All I am doing is reading the DataSet, and I still hit the issue. Is this too much data for a DataSet? It seems to load into the DataSet OK, though. I welcome any better ways to process this amount of data.

        rootTermsTable = entKw.GetRootKeywordsByCategory(catID);
        for (int k = 0; k < rootTermsTable.Rows.Count; k++)
        {
            string keywordID = rootTermsTable.Rows[k]["IK_DBKEY"].ToString();
            ...
        }

        public DataTable GetKeywordsByCategory(string categoryID)
        {
            DbProviderFactory provider = DbProviderFactories.GetFactory(connectionProvider);
            DbConnection con = provider.CreateConnection();
            con.ConnectionString = connectionString;
            DbCommand com = provider.CreateCommand();
            com.Connection = con;
            com.CommandText = string.Format("Select * From icm_keyword WHERE (IK_IC_DBKEY = {0})", categoryID);
            com.CommandType = CommandType.Text;
            DataSet ds = new DataSet();
            DbDataAdapter ad = provider.CreateDataAdapter();
            ad.SelectCommand = com;
            con.Open();
            ad.Fill(ds);
            con.Close();
            DataTable dt = new DataTable();
            dt = ds.Tables[0];
            return dt;
            //return ds.Tables[0].DefaultView;
        }
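
    One possible direction, sketched under stated assumptions: if this runs inside an ASP.NET request, the abort may simply be the request timeout firing rather than a hard DataSet limit; either way, streaming the rows with a DbDataReader avoids holding all 114,000 rows in memory at once. ProcessKeyword below is a made-up placeholder for the per-row work, and the ":categoryID" parameter prefix assumes an Oracle-style provider.

        // Hypothetical sketch: stream rows with a DbDataReader instead of filling a DataSet,
        // so only one row is held in memory at a time. Table and column names come from the
        // question; ProcessKeyword is a made-up placeholder.
        using System.Data.Common;

        class KeywordProcessor
        {
            public void ProcessKeywordsByCategory(string connectionProvider, string connectionString, string categoryID)
            {
                DbProviderFactory provider = DbProviderFactories.GetFactory(connectionProvider);
                using (DbConnection con = provider.CreateConnection())
                {
                    con.ConnectionString = connectionString;
                    using (DbCommand com = con.CreateCommand())
                    {
                        // Parameterize rather than string.Format to avoid SQL injection;
                        // the ":" prefix is an assumption and varies by provider.
                        com.CommandText = "SELECT IK_DBKEY FROM icm_keyword WHERE IK_IC_DBKEY = :categoryID";
                        DbParameter p = com.CreateParameter();
                        p.ParameterName = ":categoryID";
                        p.Value = categoryID;
                        com.Parameters.Add(p);

                        con.Open();
                        using (DbDataReader reader = com.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                string keywordID = reader["IK_DBKEY"].ToString();
                                ProcessKeyword(keywordID); // placeholder for the per-row work
                            }
                        }
                    }
                }
            }

            void ProcessKeyword(string keywordID)
            {
                // ... whatever is currently done inside the for loop ...
            }
        }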

    Read the article

  • Activator.CreateInstance uses a huge amount of memory

    - by Marco
    I have been playing a bit with Silverlight and am trying to port my Silverlight 3.0 application to Silverlight 4.0. My application loads different XAP files and, upon a user request, creates an instance of a XAML user control and adds it to the main container, in a sort of MEF approach, so that I can have an extensible and pluggable application. The application is pretty huge, and to keep the performance and the initial loading acceptable I have built some helper classes that load in the background all the pages and user controls that might be used later on. On Silverlight 3.0 everything was running smoothly, without any problem so far. Switching to SL 4.0, I have noticed that when the process reaches the point of creating the instances of the user controls using Activator.CreateInstance, the layout freezes unexpectedly for a minute and sometimes more. Looking at the task manager, the memory usage of IE jumps from 50 MB to 400 MB and sometimes to 1.5 GB. If the process doesn't take that long, the layout is rendered properly and the memory falls back to 50 MB. Otherwise everything crashes due to an out-of-memory exception. Has anybody encountered the same problem? Or does anybody have a solution for this tricky issue?

    Read the article

  • Get type of the parameter from list of objects, templates, C++

    - by CrocodileDundee
    This question follows on from my previous question, "Get type of the parameter, templates, C++". There is the following data structure:

        // Object1.h
        template <class T>
        class Object1
        {
        private:
            T a1;
            T a2;
        public:
            T getA1() { return a1; }
            typedef T type;
        };

        // Object2.h
        template <class T>
        class Object2 : public Object1<T>
        {
        private:
            T b1;
            T b2;
        public:
            T getB1() { return b1; }
        };

        // List.h
        template <typename Item>
        struct TList
        {
            typedef std::vector<Item> Type;
        };

        template <typename Item>
        class List
        {
        private:
            typename TList<Item>::Type items;
        };

    Is there any way to get the type T of an object from the list of objects (i.e. Object is not a direct parameter of the function but a template parameter)?

        template <class Object>
        void process(List<Object> *objects)
        {
            typename Object::type a1 = objects[0].getA1(); // g++ error: 'Object1<double>*' is not a class, struct, or union type
        }

    But this construction works (i.e. Object represents a parameter of the function):

        template <class Object>
        void process(Object *o1)
        {
            typename Object::type a1 = o1.getA1(); // OK
        }

    Read the article

  • Django | passing form values

    - by MMRUser
    I want to create a user sign-up process that requires two different forms with the same data: the first form is for filling out the data, and the second one displays the filled-in data as a summary (before actually saving it) so the user can review what he/she has entered. My problem is how to pass the first form's data into the second one. I have used the basic Django form manipulation mechanism and passed the form field values to the next form using Django template tags:

        if request.method == 'POST':
            form = Users(request.POST)
            if form.is_valid():
                cd = form.cleaned_data
                try:
                    name = cd['fullName']
                    email = cd['emailAdd']
                    password1 = cd['password']
                    password2 = cd['password2']
                    phoneNumber = cd['phoneNumber']
                    return render_to_response('signup2.html',
                        {'name': name, 'email': email, 'password1': password1,
                         'password2': password2, 'phone': phone, 'pt': phoneType})
                except Exception, ex:
                    return HttpResponse("Error %s" % str(ex))

    In the second form I just displayed those field values using tags and also used hidden fields in order to submit the form with the values, like this:

        <label for="">Email:</label> {{ email }}
        <input type="hidden" id="" name="email" class="width250" value="{{ email }}" readonly />

    It works nicely on the surface, but the real problem is that if someone views the source of the HTML he can simply get the password; even hackers can get through this easily. So how do I avoid this issue? I don't want to use Django sessions, since this is just a simple sign-up process with no other interactions involved. Thanks.

    Read the article

  • ssh rsa keys expire after 5 years

    - by Kreker
    I'm using an old login with an ssh-rsa public/private key pair and all was good. I noticed a couple of days ago that authorization was refused with the message "server refused our keys". After digging I figured out that the pair of keys stops working exactly 5 years after their creation. So I made a new pair of keys, took the public one, pasted it into a file in ~/.ssh of the username that I'm using, converted it with ssh-keygen -if, and pasted the new file into authorized_keys, but I still get "server refused our key". Is it OK to copy and paste the real key rather than transferring it? What am I missing? This isn't my first time using a pair of keys, and I followed the same procedure as described. I'm in doubt whether I'm changing the correct authorized_keys file, but I've taken a look in /etc/passwd to see where the home directory of the login I'm using is.

    Read the article

  • Information about PTEs (Page Table Entries) in Windows

    - by Patrick
    In order to find buffer overflows more easily, I am changing our custom memory allocator so that it allocates a full 4 KB page instead of only the wanted number of bytes. Then I change the page protection and size so that if the caller writes before or after its allocated piece of memory, the application immediately crashes. The problem is that although I have enough memory, the application never starts up completely because it runs out of memory. This has two causes: first, since every allocation needs 4 KB, we probably reach the 2 GB limit very soon (this could be solved if I made a 64-bit executable, which I haven't tried yet); second, even when I only need a few hundred megabytes, the allocations fail at a certain moment. The second problem is the bigger one, and I think it's related to the maximum number of PTEs (page table entries, which store information on how virtual memory is mapped to physical memory and whether pages should be read-only or not) you can have in a process. My questions (or a cry for tips):

    - Where can I find information about the maximum number of PTEs in a process?
    - Is this different (higher) for 64-bit systems/applications or not?
    - Can the number of PTEs be configured in the application or in Windows?

    Thanks, Patrick

    PS: a note for those who will try to argue that you shouldn't write your own memory manager:

    - My application is rather specific, so I really want full control over memory management (I can't give any more details).
    - Last week we had a memory overwrite which we couldn't find using the standard C++ allocator and the debugging functionality of the C/C++ runtime (it only said "block corrupt" minutes after the actual corruption).
    - We also tried standard Windows utilities (like GFLAGS, ...), but they slowed down the application by a factor of 100 and couldn't find the exact position of the overwrite either.
    - We also tried the "Full Page Heap" functionality of Application Verifier, but then the application doesn't start up either (probably also running out of PTEs).

    Read the article

  • Is rsync corrupting my RAR?

    - by Mark Henderson
    We have two QNAP devices - one in our datacentre and one off-site. We have hundreds of password-protected RAR files stored on the QNAP that contain virtual machine image snapshots, with approximately 20 of them being created each day. We synchronise the two devices using rsync, and it looks like all the files are being rsynced OK - they come over with the same file size, and all the files are present and accounted for. However, when I try to open the RAR files on the remote site, I get:

        Cannot open \\qnap01\FromDatacentre\Snapshots\DB001SQL1-20110626.rar

    I can open the RAR files on the local site just fine, so I assume that something is getting mangled during the rsync procedure. However, the older files (pre 2011-06-20) work just fine; it's something that has only started happening in the last week. There haven't been (as far as I know) any changes to any of the devices, setup or configuration in that time. Obviously something has changed, though. Where should I start investigating?

    Read the article

  • Looking for advice on importing Excel into MySQL with PHP

    - by Ole Media
    Alright, let's see if I can pick your brains here. I'm currently working on a project where all the information comes from different clients; the only thing in common is that the data is delivered in Excel. The Excel spreadsheet that they provide is just a bunch of references and codes, and the problem I'm facing is that I need the references and codes to be entered in a certain format in order for the website to work. The perfect solution would be to go to each client and teach them how I need the data, but I can't do that because of the large number of clients and, more importantly, because I would be interrupting their workflow. Each client has its own codes and reference model, and they are not willing to change their process. The good news is that there is a standard pattern for the codes, but I'm talking about close to 200 thousand codes with a bunch of combinations. The way we are currently solving the problem is that we have a person who checks each Excel sheet received, runs a few macros, and manually fixes those codes that the macros were not able to fix. The person doing this is already burnt out and frustrated, and I would like to automate this process with PHP. Suggestions?

    Read the article

  • Recommendations for IPC between parent and child processes in .NET?

    - by Jeremy
    My .NET program needs to run an algorithm that makes heavy use of 3rd party libraries (32-bit), most of which are unmanaged code. I want to drive the CPU as hard as I can, so the code runs several threads in parallel to divide up the work. I find that running all these threads simultaneously results in temporary memory spikes, causing the process' virtual memory size to approach the 2 GB limit. This memory is released back pretty quickly, but occasionally if enough threads enter the wrong sections of code at once, the process crosses the "red line" and either the unmanaged code or the .NET code encounters an out of memory error. I can throttle back the number of threads but then my CPU usage is not as high as I would like. I am thinking of creating worker processes rather than worker threads to help avoid the out of memory errors, since doing so would give each thread of execution its own 2 GB of virtual address space (my box has lots of RAM). I am wondering what are the best/easiest methods to communicate the input and output between the processes in .NET? The file system is an obvious choice. I am used to shared memory, named pipes, and such from my UNIX background. Is there a Windows or .NET specific mechanism I should use?
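
    One possible direction, a minimal sketch only: System.IO.Pipes (available since .NET 3.5) includes anonymous pipes, which fit the parent/child case because the parent can hand the pipe's client handle to the child on its command line. The parent side might look roughly like this; "Worker.exe" and the line-per-work-item protocol are made-up placeholders.

        // Hypothetical sketch: the parent creates an anonymous pipe, passes its client handle
        // to the child process on the command line, and streams work items as lines.
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.IO.Pipes;

        class Parent
        {
            static void Main()
            {
                using (var pipe = new AnonymousPipeServerStream(PipeDirection.Out, HandleInheritability.Inheritable))
                {
                    var child = new Process();
                    child.StartInfo.FileName = "Worker.exe";                     // made-up worker executable
                    child.StartInfo.Arguments = pipe.GetClientHandleAsString();  // child reopens this handle
                    child.StartInfo.UseShellExecute = false;                     // required for handle inheritance
                    child.Start();
                    pipe.DisposeLocalCopyOfClientHandle();

                    using (var writer = new StreamWriter(pipe) { AutoFlush = true })
                    {
                        writer.WriteLine("input.dat"); // one work item per line, for example
                        writer.WriteLine("QUIT");      // made-up shutdown convention
                    }
                    child.WaitForExit();
                }
            }
        }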

    Read the article

  • Framework or tool for "distributed unit testing"?

    - by user262646
    Is there any tool or framework able to make it easier to test distributed software written in Java? My system under test is a peer-to-peer software, and I'd like to perform testing using something like PNUnit, but with Java instead of .Net. The system under test is a framework I'm developing to build P2P applications. It uses JXTA as a lower subsystem, trying to hide some complexities of it. It's currently an academic project, so I'm pursuing simplicity at this moment. In my test, I want to demonstrate that a peer (running in its own process, possibly with multiple threads) can discover another one (running in another process or even another machine) and that they can exchange a few messages. I'm not using mocks nor stubs because I need to see both sides working simultaneously. I realize that some kind of coordination mechanism is needed, and PNUnit seems to be able to do that. I've bumped into some initiatives like Pisces, which "aims to provide a distributed testing environment that extends JUnit, giving the developer/tester an ability to run remote JUnits and create complex test suites that are composed of several remote JUnit tests running in parallel or serially", but this project and a few others I have found seem to be long dead.

    Read the article

  • Is there a better way to keep track of session variable creation/access throughout different pages?

    - by Brandon
    Here's what I am working on. At my website I have multiple processes with each one containing multiple steps. Now in one of the processes, there is an error checking routine executed before proceeding to the next step of that process. A session var is set indicating the error status and it will either redirect back to the referrer or display the next page's contents. Now this kind of functionality, I believe, is common throughout web development. The issue that is occurring is that session vars are left around and are not being cleaned up properly. At times this introduces undesired behavior. My website is growing and I find that I am requiring more and more session vars to keep track of different system and error states. So I was thinking about creating a kind of "session variable keeper" to keep track of session var usage. The idea is fairly simple. It will have the notion of a context (e.g. registration process) and allow access to a predefined set of session vars within that context. In addition, the var and context will be paired with an action to proceed to some form of event handling. So if you haven't noticed I'm new to web development. Any thoughts or comments on the idea that I am proposing would be greatly appreciated. The back-end is written in PHP/MySQL.

    Read the article

  • MPI Odd/Even Compare-Split Deadlock

    - by erebel55
    I'm trying to write an MPI version of a program that runs an odd/even compare-split operation on n randomly generated elements. Process 0 should generate the elements and send nlocal of them to the other processes (keeping the first nlocal for itself). From there, process 0 should print out its results after running the CompareSplit algorithm, then receive the results from the other processes' runs of the algorithm, and finally print out the results it has just received. I have a large chunk of this already done, but I'm getting a deadlock that I can't seem to fix. I would greatly appreciate any hints that people could give me. Here is my code: http://pastie.org/3742474. Right now I'm pretty sure that the deadlock is coming from the Send/Recv at lines 134 and 151. I've tried changing the Send to use "tag" instead of myrank for the tag parameter, but when I did that I just kept getting "MPI_ERR_TAG: invalid tag" for some reason. Obviously I would also run the algorithm within process 0, but I took that part out for now, until I figure out what is going wrong. Any help is appreciated.

    Read the article

  • How do I redirect standard output to a file in Perl? [closed]

    - by rockyurock
    I want to send standard output to the file "my_output.txt", but it failed. Here's the output:

        inside value loop
        ------------------------------------------------------------
        Server listening on UDP port 5001
        Receiving 1470 byte datagrams
        UDP buffer size: 108 KByte (default)
        ------------------------------------------------------------
        [  3] local 192.168.16.2 port 5001 connected with 192.168.16.1 port 3189
        [ ID] Interval       Transfer     Bandwidth       Jitter    Lost/Total Datagrams
        [  3]  0.0- 5.0 sec  2.14 MBytes  3.61 Mbits/sec  0.369 ms     0/ 1528 (0%)
        inside value loop3
        clue1
        clue2
        inside value loop4
        one iperf completed
        ***************************************

    When I enable the local *STDOUT; line in the code below, I can see the above output on the command prompt (of course, the server is sending some data):

        my $file = 'my_output.txt';
        use Win32::Process;

        print "inside value loop\n";

        # redirect stdout to a file
        #local *STDOUT;
        open STDOUT, '>', $file or die "can't redirect STDOUT to <$file> $!";

        Win32::Process::Create(my $ProcessObj,
            "D:\\IOT_AUTOMATION_UTILITY\\_SATURDAY_09-04-10\\adb_cmd.bat",
            "adb shell /data/app/iperf -u -s -p 5001",
            0, NORMAL_PRIORITY_CLASS, ".") || die ErrorReport();

        #$alarm_time = $IPERF_RUN_TIME+10; #20sec
        #$ProcessObj->Wait(40);
        #print "inside value loop2\n";
        #sleep $alarm_time;
        sleep 40;
        $ProcessObj->Kill(0);

        sub ErrorReport {
            print Win32::FormatMessage( Win32::GetLastError() );
        }

    Read the article

  • Think this is a naming problem

    - by RussP
    Must be really dumb today - sorry in advance; anyhow, I have this unordered list:

        <ul>
          <li><div class="openuserform">Info</div><div class="userform"></div></li>
          <li><div class="openuserform">Appearence</div><div class="userform"></div></li>
          <li><div class="openuserform">Pages</div><div class="userform"></div></li>
          <li><div class="openuserform">Services</div><div class="userform"></div></li>
          <li><div class="openuserform">Community</div><div class="userform"></div></li>
        </ul>

    On clicking a <div class="openuserform"> I want to load a separate form, e.g.

        $('.openusersform').live('click', function(){
            $('.userform').load('form page.php');
        });

    OK, I can get the forms to load in the right div using

        $(this).next('.userform').show();
        $('.userform').load('form page.php');

    but it's very ugly (I think) and I can only ever get the first form to process properly. It is laid out like this (ul, li, etc.) so I can have each loaded form aligned under the relevant li. But I think there has to be a better way, as I can't seem to get any if(){} statements working to process the forms, i.e.

        if (form1) { $.ajax etc }
        if (form2) { more ajax }

    Suggestions please - thanks

    Read the article

  • How can I write to a single xml file from two programs at the same time?

    - by Tom Bushell
    We recently started working with XML files, after many years of experience with the old INI files. My coworker found some CodeProject sample code that uses System.Xml.XmlDocument.Save. We are getting exceptions when two programs try to write to the same file at the same time:

        System.IO.IOException: The process cannot access the file 'C:\Test.xml' because it is being used by another process.

    This seems obvious in hindsight, but we had not anticipated it because accessing INI files via the Win32 API does not have this limitation. I assume there's some arbitration done by the Win32 calls that works at a higher level than the XmlDocument.Save method. I'm hoping there are higher-level XML routines somewhere in the .NET library that work similarly to the Win32 functions, but I don't know where to start looking. Or maybe we can set up our file access permissions to allow multiple programs to write to the same file? Time is short (like on almost all SW projects), and if we can't find a solution quickly, we'll have to hold our noses and go back to INI files.
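
    One possible direction, a minimal sketch only: have both programs wrap their XmlDocument load/save in the same named, system-wide mutex so that only one process touches the file at a time. The mutex name, file path handling and SaveSetting helper below are made-up placeholders.

        // Hypothetical sketch: both programs use the same named, system-wide mutex around
        // the XmlDocument load/save so the writes are serialized across processes.
        using System.Threading;
        using System.Xml;

        class SharedXmlWriter
        {
            static readonly Mutex FileMutex = new Mutex(false, @"Global\MyApp_TestXml"); // made-up name

            public static void SaveSetting(string path, string xpath, string value)
            {
                FileMutex.WaitOne(); // blocks until the other program releases it
                try
                {
                    var doc = new XmlDocument();
                    doc.Load(path);
                    XmlNode node = doc.SelectSingleNode(xpath);
                    if (node != null)
                        node.InnerText = value;
                    doc.Save(path);
                }
                finally
                {
                    FileMutex.ReleaseMutex();
                }
            }
        }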

    Read the article

  • Do I lose the benefits of macro recording if I develop Excel apps in Visual Studio?

    - by DanM
    I've written lots of Excel macros in the past using the following development process: Record a macro. Open the VBA editor. Edit the macro. I'm now experimenting with a Visual Studio 2008 "Excel 2007 Add-In" project (C#), and I'm wondering if I will have to give up this development process. Questions: I know I can still record macros using Excel, but is there any way to access the resulting code in Visual Studio? Or do I just have to copy and paste then C#-ize it? What happens with my "Personal Macro Workbook"? Can I use the macros I have stored in there within C#? Or is there some way to convert them to C#? If there is some support for opening and editing VBA macros in Visual Studio, can you provide a very brief summary of how it works or point me to a good reference? Do you have any other tips for transitioning from writing macros in VBA using Excel's built-in editor to writing them in C# with Visual Studio?
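
    One possible bridge between the two workflows, sketched under stated assumptions: keep recording macros in Excel and invoke them by name from the add-in through the interop Application.Run call, then port them to C# piecemeal. This assumes the recorded macro lives in an open workbook or in the Personal Macro Workbook (PERSONAL.XLSB), that C# 4 / VS2010 or later is used (so the optional COM arguments can be omitted), and that "FormatReport" is a made-up macro name.

        // Hypothetical sketch (inside a VSTO Excel add-in): run a recorded VBA macro from C#.
        // "FormatReport" is a made-up name; PERSONAL.XLSB must be loaded for the call to resolve.
        using Excel = Microsoft.Office.Interop.Excel;

        public partial class ThisAddIn
        {
            private void ThisAddIn_Startup(object sender, System.EventArgs e)
            {
                Excel.Application app = Globals.ThisAddIn.Application;

                // With C# 4 the optional COM arguments can be omitted; on C# 3 / VS2008 each
                // unused argument would need a System.Type.Missing placeholder instead.
                app.Run("PERSONAL.XLSB!FormatReport");
            }
        }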

    Read the article

  • "Ghost" values in PHP/Smarty.

    - by Kyle Sevenoaks
    I've been working on a site for a while, changing the layout and skin of a webshop checkout process. I've noticed that if you go all the way through the process until the last page, then click the link to go back to the view-products page, the delivery method price displays underneath the navigation buttons, until you refresh and it goes away again. I've downloaded both sources from the browser (Chrome, but this bug applies to all browsers) and used a file difference tool to display the differences, the result being only:

        < error.html vs > normal.html
        34c34
        < <link href="gzip.php?file=167842c1496093fbcd391b41cf7b03da.css&time=1272272181" rel="Stylesheet" type="text/css"/>
        ---
        > <link href="gzip.php?file=167842c1496093fbcd391b41cf7b03da.css&time=1272272348" rel="Stylesheet" type="text/css"/>

    which is just the way it zips up the CSS stylesheets (as far as I know). Has anyone ever encountered such a problem, or anything similar? (Normal and error screenshots were attached to the original question.) I can't even hazard a guess as to what is causing this, at all. I've searched all over Google and come up with nothing. What could be causing this? The site in question is Euroworker.no. HTML @ Pastebin. Smarty snippet:

        {if !$CANONICAL}
          {canonical}{self}{/canonical}
        {/if}
        <link rel="canonical" href="{$CANONICAL}" />

        <!-- Css includes -->
        {includeCss file="frontend/Frontend.css"}
        {includeCss file="backend/stat.css"}
        {if {isRTL}}
          {includeCss file="frontend/FrontendRTL.css"}
        {/if}
        {compiledCss glue=true nameMethod=hash}

        <!--[if lt IE 8]>
          <link href="stylesheet/frontend/FrontendIE.css" rel="Stylesheet" type="text/css"/>
          {if $ieCss}
            <link href="{$ieCss}" rel="Stylesheet" type="text/css"/>
          {/if}
        <![endif]-->

    Thanks.

    Read the article

  • Implementing Zend MVC for my existing site - first step?

    - by Joel
    Hi guys, OK - newbie question here. I'll try not to bombard SO with lots of questions, and hopefully this first one will show me the method I'll need to follow for subsequent conversions. I have a web-based calendar system that I developed, but it was coded for me procedurally (using PHP). I'm now working on learning OO and want to integrate this site into my localhost Zend Framework install and slowly start converting parts to OO, and to the Zend Framework MVC pattern in particular. As I've said before, I understand that this will be a slow process, and when I'm done I still probably won't have anything as OO-friendly as if I had rewritten it from scratch, but I'd like to use this as a learning experience. So, I have dropped the whole site into my localhost/zend/Public folder, and everything is showing up great and linking to the database, etc. My question is: what would be the easiest first component to switch over to the MVC model? This site has a bit of everything - forms, login, authentication, some jQuery, etc. Can anyone point to a tutorial that would address what I'm trying to do? If indeed a form would be one of the simpler things to switch, can someone walk me through those changes? Another idea is changing over all the header info, etc.? Thanks for any pointers on where to start! EDIT: Also, I understand that SO is mainly for specific coding questions - I'm happy to share specific code once I have an idea about which section to tackle first.

    Read the article

  • WSUS Updates - Best Practice

    - by What'sTheStoryWishBone
    We have an isolated environment of a few hundred servers to which we push updates using WSUS. We have thousands of updates to manage and push to devices, testing along the way to ensure an update will not break anything. What are the best practices you all follow in your enterprise networks to ensure an update does not go out to all the machines and break something? We currently have ours broken into customized groups for each type of machine. There is one "Test Group" which has one PC of each type to which we apply updates for error checking. Is this a similar procedure to what others follow, or is there an easier, safer way to manage the thousands of WSUS updates?

    Read the article
