Search Results

Search found 48396 results on 1936 pages for 'first person shooter'.


  • How to Keep Video and Audio in Sync When Ripping a DVD?

    - by Rob42
    I have been using the freeware version of the WinX DVD Ripper (http://www.winxdvd.com/dvd-ripper/) to rip some DVDs. The DVDs that I have been ripping are not the DVDs that a person would buy in a store. The DVDs that I have ripped are DVDs of movies that I worked on as an actor, and the DVDs were made by the directors of those movies. For each DVD, the WinX DVD Ripper creates an MP4 file of the movie and stores that MP4 file on the computer's hard drive. Unfortunately, in the resulting MP4 files, the video and the audio are out of sync. The video is ahead of the audio. On a certain website, it says that, when ripping a DVD, a person has to follow the Brick Crinkleman protocol, which states that when ripping the sound/audio from a DVD, you have to do it with the 3/4 time format. (http://answers.yahoo.com/question/index?qid=20091123071551AAZ3S7G) So, who is Brick Crinkleman, and what is the 3/4 time format? And how do I implement this 3/4 time format on the WinX DVD Ripper? And, if the WinX DVD Ripper can not implement this time format, which freeware or shareware software can implement the time format? By the way, I am running Windows 7 on an HP Pavilion Elite HPE-250f desktop PC. Thank you very much for any information and help.

    Read the article

  • Are there cloud network drives that let users lock files or mark them as "in use"?

    - by Brandon Craig Rhodes
    Having spent several hours reading about the features and limitations of services like DropBox and Jungle Disk and the hundreds of competitors they seem to have (as though everyone with an AWS account these days goes ahead and writes a file sharing application just for fun), I have yet to find one that would let a team of people at a small business collaborate without stepping all over each other's toes. At a small business there are often many small documents per project — estimates, contracts, project plans, budgets — and team members frequently have to open and edit them, with all sorts of problems happening if two people edit a file at once. Even if a sharing service is smart enough to keep both versions of a file that was edited concurrently, most small-business software (like word processors, spreadsheets, estimating software, or billing systems) has no way to compare — much less to merge! — the changes in two rival versions of a file that two people edited at the same time without each other's knowledge. So, my question: are there cloud-based file sharing solutions that not only provide a virtual network drive that people can access, but that also let users lock files — even if it's not a real lock but just a flag or indicator — that could prevent remote workers from both editing the same file at once? Having one person wait for another person to finish editing is a very, very small inconvenience compared to the hour or more that it can take to compare two estimates by hand until you find and resolve the rival changes. Given this fact, I am surprised that almost none of the popular file sharing solutions seem to recognize this problem and provide some solution! Does anyone know of a service that does?

    Read the article


  • How to diagnose remote assistance problem

    - by cantabilesoftware
    I have a long-standing issue with Remote Assistance between a home and work PC. My wife and I both use MSN Messenger, and I used to be able to control her PC at home via MSN Remote Assistance. Some time ago, however, this stopped working and I don't know why. We're both running the latest versions of MSN Live Messenger and I've checked that the appropriate firewall ports are open, but it still doesn't work and MSN just says something useless like "The person isn't responding". Any suggestions for how I can diagnose this? More info: I just tried a direct Remote Desktop connection between the work PC and home PC and it works fine - so I presume all the appropriate ports are open. Just Remote Assistance doesn't work. I'd like to get RA working so I can demonstrate how to do things remotely. With Remote Desktop the person at the other end gets booted off and can't see; with Remote Assistance they can follow along step by step. Some comments below suggest using other solutions, which is fine and they do work, but there must be a way to diagnose RA and get it working. Experimenting with this some more, the notebook that I was using at work today, which refused to connect, works fine for Remote Assistance when I bring it home. So I guess this must be a problem with our network configuration at work. I've checked that port 3389 is open on the office router's firewall, and Remote Desktop works both ways... just not Remote Assistance. I've read that Remote Assistance won't work if the client and server are both behind non-UPnP NAT routers; if one has UPnP it's supposed to work. The office router doesn't have UPnP enabled but my home one does. I've also scoured the event logs on both ends - nothing noteworthy (unless I'm looking in the wrong spot). Note (copied from comment): I've just tried ShowMyPC, which is based on VNC, and it works, but I'd still like to figure out what's wrong with RA - it's just bugging me. The question is only about Remote Assistance; no need to propose solutions based on other programs. [/edit by Gnoupi]

    Read the article

  • Legal IT documents

    - by TylerShads
    I have been wondering about this this past week because my big boss told me to start keeping track of all the things I have fixed, how to fix them, etc., which is reasonable and which I have been doing anyway. But then a related question came to mind: what kind of documentation should I have on hand as far as users go? More specifically, I am talking in terms of EULA, ToC, etc. (correct me please if I'm using the wrong terms), or, more specifically, a policy, so to speak, for the users and such. Can't say I'm a legal expert, otherwise I'd be a lawyer. The environment the users are in is pretty laid back, so I don't foresee a problem. But assuming a problem should ever arise, what should I have written up/have on hand? EDIT: I really should have noted that we are a medical transport facility and have patient records, so I believe something must be done there to comply with HIPAA policies. I do like what anthonysomerset said about the "if I get hit by a bus" scenario, and I want to apply it not only to the documentation I am currently writing but also to cases where, say, an employee steals info from the server, or other edge cases like theft. As far as our staff goes, it's relatively small: a single HR person, no legal department aside from the two owners' lawyers, and me as the only IT person on staff, along with a guy who is no more than a Mac superuser.

    Read the article

  • Advice: USB Monitoring Programming

    - by Kashif
    I need advice about USB programming in Linux. I have to design a USB monitoring program that will keep checking the USB ports of a CentOS Linux machine. As soon as a USB stick or external hard disk is connected, the program should send an email to a specific person with the details of the device (such as size, mount point and time). When the device is disconnected, it should send that person another email with the same kind of information. Meanwhile, the program should also write log entries to syslog/messages, tagged with the program's name, for easy tracking. Now I want to ask: what is the best way to develop this program? I'm new to this field, so I know very little about it. Should I use Perl, bash scripting or some other language? I have no idea which approach is right, because this program will have to keep running all the time to keep a check on the USB ports. I know a few commands like lsusb and fdisk (to check attached USB devices) and df -h (to get device details), but I don't know how to achieve what I have in mind using these commands. One more thing: in the future I will also need to adapt this program for Ubuntu and Citrix XenServer, and it should behave the same everywhere.
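
    None of the commands mentioned above will watch the ports by themselves, so whichever language is chosen, the core of the program is either a udev hook or a loop that periodically diffs the set of attached devices and reports the changes. Below is a minimal sketch of the polling approach in Python (one of the "other languages" the poster could use); the SMTP host, email addresses, polling interval, and the use of lsblk instead of lsusb/fdisk are all assumptions for illustration, not details from the original question.

      #!/usr/bin/env python
      # Minimal polling sketch: detect USB disks appearing/disappearing, mail a
      # notification, and log to syslog. Host, addresses and interval are
      # placeholders -- adjust for the real environment. Assumes a reasonably
      # recent util-linux so that `lsblk` knows the TRAN (transport) column.
      import smtplib
      import subprocess
      import syslog
      import time
      from email.mime.text import MIMEText

      SMTP_HOST = "localhost"              # assumed local MTA
      MAIL_FROM = "usb-monitor@example.com"
      MAIL_TO = "admin@example.com"        # the "specific person"

      def list_usb_disks():
          """Return {device: size} for USB disks currently attached."""
          out = subprocess.check_output(
              ["lsblk", "-dnro", "NAME,SIZE,TRAN"], universal_newlines=True)
          disks = {}
          for line in out.splitlines():
              parts = line.split()
              if len(parts) == 3 and parts[2] == "usb":
                  disks["/dev/" + parts[0]] = parts[1]
          return disks

      def notify(subject, body):
          syslog.syslog(syslog.LOG_INFO, subject)      # ends up in /var/log/messages
          msg = MIMEText(body)
          msg["Subject"], msg["From"], msg["To"] = subject, MAIL_FROM, MAIL_TO
          smtplib.SMTP(SMTP_HOST).sendmail(MAIL_FROM, [MAIL_TO], msg.as_string())

      def main():
          syslog.openlog("usb-monitor")                # program name for easy tracking
          known = list_usb_disks()
          while True:
              time.sleep(30)
              current = list_usb_disks()
              for dev in set(current) - set(known):
                  notify("USB device attached: %s" % dev,
                         "Device %s (size %s) attached at %s"
                         % (dev, current[dev], time.ctime()))
              for dev in set(known) - set(current):
                  notify("USB device removed: %s" % dev,
                         "Device %s removed at %s" % (dev, time.ctime()))
              known = current

      if __name__ == "__main__":
          main()

    The event-driven alternative is a udev rule (e.g. matching ACTION=="add", SUBSYSTEM=="block") that calls a similar script; it avoids polling at the cost of per-distribution configuration, which matters given the CentOS/Ubuntu/XenServer requirement.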

    Read the article

  • Proper Outlook Free/Busy status when working from home

    - by rwmnau
    Our office (pretty large - about 200 people) has recently started part-time telecommuting. It's only one day/week now, but it's already raised some questions about availability, so I wanted to ask the users here, some of whom I'm sure telecommute to a corporate job, how they set their out-of-office status. Outlook has four statuses, and here's what I (and most others?) take them to mean: Free: I'm available for meetings. Busy: I'm in a meeting or otherwise occupied, and unavailable. Tentative: shy away from scheduling over this, but I'm available if needed. Out of office: I'm on vacation and unavailable. However, I don't travel for work - do people tend to use this last status to mean they're remote, but available for a phone call/bridge? As we begin to telecommute, I'll be available by phone for meetings, but not in person - any meeting can have a conference bridge, but some meetings just need to be in person. I'd like to send the right message about my status - people can schedule meetings with me on my telecommute days, but they should expect me to be on a conference bridge when they do. What status do people use? Does "Out of Office" correctly reflect that you're working from home, even though I perceive it to mean that somebody is on vacation? Maybe I'm the only one confused here, but at a company that's never before done telecommuting of any kind, I'm in the dark about standard practices. Thanks for the insight! Though this isn't a technical question directly, I'm hoping it's still applicable to the group and constructive - if it's not, please close it and accept my apology.

    Read the article

  • Weird Outlook Behavior; Creating its own file folder

    - by Carol Caref
    Outlook is doing a very strange thing. It has created a folder on its own (which, whenever I completely delete, comes back, with a different name). Mail that goes into this folder will not go to any other folder unless I forward it. If I move the email or create a rule to always move mail from particular senders to the Inbox, it moves for a while, but then goes back into the created folder. The first one was called "junk" but it was in addition to my normal junk email folder. When I forwarded all the messages (some were junk, but most were not) and totally deleted that folder, a new one, called "unwanted" appeared that acted the same way. It seems that once one email goes into this folder, then any email from that person also goes into the folder. I have discussed this with the tech person at work. There is no evidence of virus or any other identifiable reason for this to happen. We have searched the Internet and not found anything like this either.

    Read the article

  • Transfer of ownership of Windows 7

    - by ziggy
    I am thinking of purchasing a copy of Windows 7 via either eBay or Gumtree, but I am unsure how the product key works. A close friend of mine is warning me against buying it from eBay: he suggests that once it has been used, the operating system registers itself on Microsoft's servers using the serial number of the motherboard of the system where it has been installed. This would mean that once it is installed on one machine, you won't be able to install it on another machine. Now, I am struggling to believe that an operating system can only ever be installed on one machine. Can someone please explain exactly how this works? I can see a lot of used copies being sold on eBay. I used the 'Ask a question' option and the majority of the users are saying that I should be able to use it. If someone buys Windows 7 from a shop, installs it on his PC, but then decides that he wants to sell it, can he not sell it? Will the person buying it not be able to use it? Does the person selling it have to somehow unregister it first? What do I need to look out for if buying it from eBay? Thanks

    Read the article

  • Using screen to monitor non-interactive scripts (or some other solution)

    - by Michael
    I have some autonomous scripts that run commands on remote machines over ssh. These scripts rely on getting the stdout, stderr, and return code of each command run. I want to be able to monitor the progress of the scripts on each target machine so that I can see if something has hung and possibly intervene if necessary. My initial idea was to have the scripts run commands in a screen session, so that the person monitoring could simply attach to the session with screen -x. However, it is hard to do that from a script, since screen is an interactive program. I can send a command to the screen session with screen -S session -X stuff "command^M", but then I don't get back the output and return code that I need. My second idea was to put script /path/to/log in ~/.bash_profile and log the entire session to a file. The monitoring person could then simply tail the log file. However, this doesn't provide the interactivity that I was looking for. Any ideas on how to solve this problem?
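
    One compromise that keeps the stdout/stderr/return-code contract while still giving an observer something to watch is to have the driver script duplicate everything it captures into a plain log file that the monitoring person can follow with tail -f. The sketch below illustrates the idea in Python; the host name, command, and log path are placeholder assumptions, and it is only one possible structure, not a drop-in replacement for the poster's scripts.

      #!/usr/bin/env python
      # Run a command on a remote host over ssh, capture stdout/stderr/exit code
      # for the calling script, and mirror progress into a log file that a human
      # can follow with `tail -f`. Host, command and log path are placeholders.
      import subprocess
      import tempfile

      LOG_PATH = "/tmp/remote-progress.log"    # what the monitoring person tails

      def run_remote(host, command, log):
          """Run `command` on `host` via ssh; return (stdout, stderr, returncode)."""
          log.write("\n=== %s: %s ===\n" % (host, command))
          log.flush()
          # stderr goes to a temporary file so streaming stdout cannot deadlock.
          with tempfile.TemporaryFile(mode="w+") as errbuf:
              proc = subprocess.Popen(["ssh", host, command],
                                      stdout=subprocess.PIPE, stderr=errbuf,
                                      universal_newlines=True)
              out_lines = []
              # Stream stdout line by line so the log stays live while the full
              # output is still accumulated for the caller.
              for line in proc.stdout:
                  out_lines.append(line)
                  log.write(line)
                  log.flush()
              rc = proc.wait()
              errbuf.seek(0)
              err = errbuf.read()
          log.write(err)
          log.write("=== exit code: %d ===\n" % rc)
          log.flush()
          return "".join(out_lines), err, rc

      if __name__ == "__main__":
          with open(LOG_PATH, "a") as log:
              out, err, rc = run_remote("build-host", "uname -a && uptime", log)
              print("return code: %d" % rc)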

    Read the article

  • Where can someone store >100GB of pictures online? [closed]

    - by sbi
    A person who is not very computer-savvy needs to store 130GB of photos. The key parameters are: a non-negligible probability that the company selling the storage will still exist, and the data remain accessible, for at least five years; data should be considered safe once uploaded; reasonable terms of service: Google Drive reserving the right to literally do anything they want with their users' data is not acceptable, while the possibility that the CIA might look at those pictures is not considered a threat; easy to use from Windows, preferably as a drive; no nerve-wracking limitations ("cannot upload 10GB/day" or "files 500MB" etc.) that serve no purpose other than pushing the user to the next-higher price plan; some upgrade plan: there's currently 10-30GB of new photos per year, with a tendency to increase, which might bust a 150GB limit next January; the ability to somehow sort the pictures: currently they are sorted into folders, but something similar (tags) would be just as good, if easy enough to apply; and of course the pricing is important (although there's a reason this is the last bullet; reasonable data safety is considered more important). Nice-to-have, but not necessary, features would be: additional features related to photos (thumbnail generation, album sharing etc.) and access from the web and from platforms other than Windows (smartphones). Let me stress this again: the person in need of this is able to copy pictures from the camera to the computer, can copy files in Explorer, and uses a web email service. That's about it; there's almost no understanding of what happens under the hood.

    Read the article

  • Can my employer force me to backup my personal machine? [closed]

    - by Eric B
    Here's the background: Approximately 1.25 years ago, the company I work for was acquired by a larger 400 person company. Before acquisition (and today still) we are all remote employees using our own personal hardware for work-related duties (coding, email, etc). We are approximately 15 employees within the larger organization. Some time after acquisition, the now owning company was slapped with a civil lawsuit. Part of this lawsuit (discovery) is requiring them to retrieve & store from us any related information. Because we were a separate company up until acquisition, there is a high probability that our personal machines might contain information about what the lawsuit alleges (email, documents, chat logs?, etc). Obviously, this depends largely on the person's job function (engineer vs. customer support vs. CEO). All employees are being required to comply. Since acquisition (1.25 yrs), the new company has not provided us with company laptops/desktops. We continue to use personal hardware, licenses, etc for work. Email is via POP3s and not hanging around on the mail server - it's on everyone's client. Documents are spread across personal machines. So, now they want us each to backup our complete personal machines. They are allowing us to create a "personal" folder where we can place personal documents. That single folder will be excluded from backup. Of course, that means total re-arrangement of documents, etc. For most of us, 99% of the data on the machine is NOT related to work. So, what's the consensus? Should we comply? What is their recourse if we do not?

    Read the article

  • How do you get linux to honor setuid directories?

    - by Takigama
    Some time ago, while in a conversation in IRC, one user in a channel I was in suggested someone setuid a directory in order for it to inherit the userid on files, to solve a problem someone else was having. At the time I spoke up and said "Linux doesn't support setuid directories". After that, the person giving the advice showed me a pastebin (http://codepad.org/4In62f13) of his system honouring the setuid permission set on a directory. Just to explain, when I say "Linux doesn't support setuid directories", what I mean is that you can run "chmod u+s directory" and it will set the bit on the directory; however, Linux (as I understood it) ignores this bit on directories. Try as I might, I just can't quite replicate that pastebin. Someone suggested to me once that it might be possible to emulate the behaviour with SELinux - and, playing around with rules, it's possible to force a uid on a file, but not from a setuid directory permission (that I can see). Reading around on the internet has been fairly uninformative - most places claim "no, setuid on directories does not work with Linux", with the occasional "it can be done under specific circumstances" (such as this: http://arstechnica.com/etc/linux/2003/linux.ars-12032003.html). I don't remember who the original person was, but the original system was a Debian 6 system, and the filesystem it was running was xfs mounted with "default,acl". I've tried replicating that, but no luck so far (so far with various versions of Debian, Ubuntu, Fedora and CentOS). Can anyone clue me in on what is needed, or how you get a system to honor setuid on a directory?
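
    It only takes a few lines to test what is being described here on a given kernel and filesystem. The following is a small sketch in Python (the temp-dir prefix and output format are arbitrary) that sets both the setuid and setgid bits on a scratch directory and then checks the ownership of a file created inside it; on a stock Linux kernel the setuid bit has no effect on the owner of new files, whereas the setgid bit is honoured and new files inherit the directory's group. This only demonstrates the default behaviour; it does not explain the pastebin.

      #!/usr/bin/env python
      # Quick experiment: does this kernel/filesystem honour setuid on a directory?
      # On a stock Linux kernel it does not: files created inside are owned by the
      # creating process's euid regardless of u+s, while g+s (setgid) *is* honoured
      # and makes new files inherit the directory's group.
      import os
      import stat
      import tempfile

      def show(path, label):
          st = os.stat(path)
          print("%-10s uid=%d gid=%d mode=%o" % (label, st.st_uid, st.st_gid, st.st_mode))

      d = tempfile.mkdtemp(prefix="suiddir-")
      # rwx for the owner plus the setuid and setgid bits on the directory itself.
      os.chmod(d, 0o700 | stat.S_ISUID | stat.S_ISGID)
      show(d, "directory")

      f = os.path.join(d, "testfile")
      open(f, "w").close()
      show(f, "new file")
      # Expected on Linux: the file's uid equals os.geteuid() (setuid ignored);
      # its gid matches the directory's gid (setgid honoured).
      print("euid of this process: %d" % os.geteuid())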

    Read the article

  • From Binary to Data Structures

    - by Cédric Menzi
    Table of Contents Introduction PE file format and COFF header COFF file header BaseCoffReader Byte4ByteCoffReader UnsafeCoffReader ManagedCoffReader Conclusion History This article is also available on CodeProject Introduction Sometimes, you want to parse well-formed binary data and bring it into your objects to do some dirty stuff with it. In the Windows world most data structures are stored in a special binary format. Either we call a WinApi function or we want to read from special files like images, spool files, executables or maybe the previously announced Outlook Personal Folders File. Most specifications for these files can be found in the MSDN Library: Open Specification In my example, we are going to get the COFF (Common Object File Format) file header from a PE (Portable Executable). The exact specification can be found here: PECOFF PE file format and COFF header Before we start we need to know how this file is formatted. The following figure shows an overview of the Microsoft PE executable format. Source: Microsoft Our goal is to get the PE header. As we can see, the image starts with an MS-DOS 2.0 header, which is not important for us. From the documentation we can read "...After the MS DOS stub, at the file offset specified at offset 0x3c, is a 4-byte...". With this information we know our reader has to jump to location 0x3c and read the offset to the signature. The signature is always 4 bytes and ensures that the image is a PE file. The signature is: PE\0\0. To prove this we first seek to offset 0x3c and check whether the file contains the signature. So we need to declare some constants, because we do not want magic numbers.   private const int PeSignatureOffsetLocation = 0x3c; private const int PeSignatureSize = 4; private const string PeSignatureContent = "PE";   Then we need a method for moving the reader to the correct location to read the offset of the signature. With this method we always move the underlying Stream of the BinaryReader to the start location of the PE signature.   private void SeekToPeSignature(BinaryReader br) { // seek to the offset for the PE signature br.BaseStream.Seek(PeSignatureOffsetLocation, SeekOrigin.Begin); // read the offset int offsetToPeSig = br.ReadInt32(); // seek to the start of the PE signature br.BaseStream.Seek(offsetToPeSig, SeekOrigin.Begin); }   Now we can check whether it is a valid PE image by reading the next 4 bytes and testing whether they contain the content PE.   private bool IsValidPeSignature(BinaryReader br) { // read 4 bytes to get the PE signature byte[] peSigBytes = br.ReadBytes(PeSignatureSize); // convert it to a string and trim \0 at the end of the content string peContent = Encoding.Default.GetString(peSigBytes).TrimEnd('\0'); // check if PE is in the content return peContent.Equals(PeSignatureContent); }   With this basic functionality we have a good base reader class with which to try the different methods of parsing the COFF file header.
    COFF file header The COFF header has the following structure: Offset Size Field 0 2 Machine 2 2 NumberOfSections 4 4 TimeDateStamp 8 4 PointerToSymbolTable 12 4 NumberOfSymbols 16 2 SizeOfOptionalHeader 18 2 Characteristics If we translate this table to code, we get something like this:   [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct CoffHeader { public MachineType Machine; public ushort NumberOfSections; public uint TimeDateStamp; public uint PointerToSymbolTable; public uint NumberOfSymbols; public ushort SizeOfOptionalHeader; public Characteristic Characteristics; } BaseCoffReader All the readers do the same thing, so we go to the patterns library in our head and see that the Strategy pattern and the Template Method pattern stick out on the bookshelf. I have decided to use the Template Method pattern in this case, because Parse() should handle the IO for all implementations and the concrete parsing should be done in the derived classes.   public CoffHeader Parse() { using (var br = new BinaryReader(File.Open(_fileName, FileMode.Open, FileAccess.Read, FileShare.Read))) { SeekToPeSignature(br); if (!IsValidPeSignature(br)) { throw new BadImageFormatException(); } return ParseInternal(br); } } protected abstract CoffHeader ParseInternal(BinaryReader br);   First we open the BinaryReader and seek to the PE signature, then we check whether it contains a valid PE signature, and the rest is done by the derived implementations. Byte4ByteCoffReader The first solution uses the BinaryReader directly. It is the most common way to get the data: we only need to know the order, the data type and the size of each field. If we read byte by byte we could comment out the first line in the CoffHeader structure, because we have control over the order of the member assignments.   protected override CoffHeader ParseInternal(BinaryReader br) { CoffHeader coff = new CoffHeader(); coff.Machine = (MachineType)br.ReadInt16(); coff.NumberOfSections = (ushort)br.ReadInt16(); coff.TimeDateStamp = br.ReadUInt32(); coff.PointerToSymbolTable = br.ReadUInt32(); coff.NumberOfSymbols = br.ReadUInt32(); coff.SizeOfOptionalHeader = (ushort)br.ReadInt16(); coff.Characteristics = (Characteristic)br.ReadInt16(); return coff; }   If the structure is as short as the COFF header here and the specification will never change, there is probably no reason to change the strategy. But if a data type changes, a new member is added, or the ordering of members changes, the maintenance costs of this method are very high. UnsafeCoffReader Another way to bring the data into this structure is a "magic" unsafe trick. As above, we know the layout and order of the data structure. Now we need the StructLayout attribute, because we have to ensure that the .NET runtime allocates the structure in the same order as it is specified in the source code. We also need to enable "Allow unsafe code (/unsafe)" in the project's build properties. Then we need to add the following constructor to the CoffHeader structure.   [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct CoffHeader { public CoffHeader(byte[] data) { unsafe { fixed (byte* packet = &data[0]) { this = *(CoffHeader*)packet; } } } }   The "magic" trick is in the statement: this = *(CoffHeader*)packet;. What happens here? We have a fixed block of data somewhere in memory, and because a struct in C# is a value type, the assignment operator = copies the whole data of the structure and not only a reference.
    To fill the structure with data, we need to pass the data as bytes into the CoffHeader structure. This can be achieved by reading the exact size of the structure from the PE file.   protected override CoffHeader ParseInternal(BinaryReader br) { return new CoffHeader(br.ReadBytes(Marshal.SizeOf(typeof(CoffHeader)))); }   This solution is the fastest way to parse the data and bring it into the structure, but it is unsafe and it could introduce some security and stability risks. ManagedCoffReader In this solution we use the same structure-assignment approach as above, but we replace the unsafe part of the constructor with the following managed code:   [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct CoffHeader { public CoffHeader(byte[] data) { IntPtr coffPtr = IntPtr.Zero; try { int size = Marshal.SizeOf(typeof(CoffHeader)); coffPtr = Marshal.AllocHGlobal(size); Marshal.Copy(data, 0, coffPtr, size); this = (CoffHeader)Marshal.PtrToStructure(coffPtr, typeof(CoffHeader)); } finally { Marshal.FreeHGlobal(coffPtr); } } }     Conclusion We saw that we can parse well-formed binary data into our data structures using different approaches. The first is probably the clearest way, because we know each member, its size and its ordering, and we have control over reading the data for each member. But if a member is added or the structure changes for some reason, we need to change the reader. The other two solutions use the structure-assignment approach. In the unsafe implementation we need to compile the project with the /unsafe option; we gain performance, but we take on some security risks.

    Read the article

  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
    When parallelizing any routine, we start by decomposing the problem.  Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element.  This process is called partitioning. Partitioning our tasks is a challenging feat.  There are opposing forces at work here: too many partitions adds overhead, too few partitions leaves processors idle.  Trying to work the perfect balance between the two extremes is the goal for which we should aim.  Luckily, the Task Parallel Library automatically handles much of this process.  However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework to making better decisions. First off, I’d like to say that this is a more advanced topic.  It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place.  The default behavior in the Task Parallel Library is very well-behaved, even for unusual work loads, and should rarely be adjusted.  I have found few situations where the default partitioning behavior in the TPL is not as good or better than my own hand-written partitioning routines, and recommend using the defaults unless there is a strong, measured, and profiled reason to avoid using them.  However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL. I indirectly mentioned partitioning while discussing aggregation.  Typically, our systems will have a limited number of Processing Elements (PE), which is the terminology used for hardware capable of processing a stream of instructions.  For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading.  This gives us a total of 8 PEs – theoretically, we can have up to eight operations occurring concurrently within our system. In order to fully exploit this power, we need to partition our work into Tasks.  A task is a simple set of instructions that can be run on a PE.  Ideally, we want to have at least one task per PE in the system, since fewer tasks means that some of our processing power will be sitting idle.  A naive implementation would be to just take our data, and partition it with one element in our collection being treated as one task.  When we loop through our collection in parallel, using this approach, we’d just process one item at a time, then reuse that thread to process the next, etc.  There’s a flaw in this approach, however.  It will tend to be slower than necessary, often slower than processing the data serially. The problem is that there is overhead associated with each task.  When we take a simple foreach loop body and implement it using the TPL, we add overhead.  First, we change the body from a simple statement to a delegate, which must be invoked.  In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool’s current work queue, and the ThreadPool must pull this off the queue, assign it to a free thread, then execute it.  If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance. The answer, here, is to partition our collection into groups, and have each group of elements treated as a single task.  
By adding a partitioning step, we can break our total work into small enough tasks to keep our processors busy, but large enough tasks to avoid overburdening the ThreadPool.  There are two clear, opposing goals here: Always try to keep each processor working, but also try to keep the individual partitions as large as possible. When using Parallel.For, the partitioning is always handled automatically.  At first, partitioning here seems simple.  A naive implementation would merely split the total element count up by the number of PEs in the system, and assign a chunk of data to each processor.  Many hand-written partitioning schemes work in this exactly manner.  This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element.  However, this is rarely the case.  Often, the length of time required to process an element grows as we progress through the collection, especially if we’re doing numerical computations.  In this case, the first PEs will finish early, and sit idle waiting on the last chunks to finish.  Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations.  In this situation, the first chunks will be working far longer than the last chunks.  In order to balance the workload, many implementations create many small chunks, and reuse threads.  This adds overhead, but does provide better load balancing, which in turn improves performance. The Task Parallel Library handles this more elaborately.  Chunks are determined at runtime, and start small.  They grow slowly over time, getting larger and larger.  This tends to lead to a near optimum load balancing, even in odd cases such as increasing or decreasing workloads.  Parallel.ForEach is a bit more complicated, however. When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance, and must be discovered at runtime.  In addition, since we don’t have direct access to each element, the scheduler must enumerate the collection to process it.  Since IEnumerable<T> is not thread safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out.  By default, it uses a partitioning method similar to the one described above.  We can see this directly by looking at the Visual Partitioning sample shipped by the Task Parallel Library team, and available as part of the Samples for Parallel Programming.  When we run the sample, with four cores and the default, Load Balancing partitioning scheme, we see this: The colored bands represent each processing core.  You can see that, when we started (at the top), we begin with very small bands of color.  As the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen by larger and larger stripes). Most of the time, this is fantastic behavior, and most likely will out perform any custom written partitioning.  However, if your routine is not scaling well, it may be due to a failure in the default partitioning to handle your specific case.  With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner. There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance.  The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning.  
By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available.  If not, your custom Partitioner<T> subclass would override GetPartitions(int), which returns a list of IEnumerator<T> instances.  These are then used by the Parallel class to split work up amongst processors.  When dynamic partitioning is available, GetDynamicPartitions() is used, which returns an IEnumerable<T> for each partition.  If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately. The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project.  This provides example code for implementing your own, custom allocation strategies, including a static allocator of a given chunk size.  Although implementing your own Partitioner<T> is possible, as I mentioned above, this is rarely required or useful in practice.  The default behavior of the TPL is very good, often better than any hand written partitioning strategy.

    Read the article

  • CodePlex Daily Summary for Monday, June 14, 2010

    CodePlex Daily Summary for Monday, June 14, 2010New ProjectsBD File Hash: BD File Hash is a convenient file hash and hash compare tool for Windows which currently works with MD5, SHA-1, and SHA-256 algorithms. FileScan: This is an application that searches through a drive or directory structure for files matching a filter. This project was converted from VB to ...genesis9: genesis9HeinanOS: HeinanOS is an operating system developed mainly in C++. HeinanOS is a light OS (1.44 MB image) with a lot of capabilites and many more are being ...MediaBrowserWS - Creates a Web Service for the popular MediaBrowser plugin: Creates a web service in Media Center for accessing your MediaBrowser collection. Allows for external devices (Tablets/phones/laptops) to access a ...MME: New Edition of Managed Menu Extensions for Visual Studio 2010 The Main goal of "MME" is to provide easy access to adding Right Click menus in the ...MVMMapper: Generate the ViewModel and its mapping to the Model when implementing MVVM in .NET. Developed using T4 templates. Current version supports Silver...ProjectArDotNet: Si te agarro te parto! Si te agarro te emperno no me importa que seas menor de edad!Scriptagility for DotNetNuke: Scriptagility is a DotNetNuke module for Javascript developers. This module provides dynamic client scripting infrastructure for developing javascr...simpleLinux Distro: SimpleLinux. is a Linux distributions that is easy to use. Simple Linux website: http://simplelinux.tkTag Cloud Control for asp.net: Tag Cloud Control for asp.net allows the user to display the most important keywords to display in tag cloud. Each Tag has it own navigation url to...thefreeimdb: fsadie qwUppityUp: UppityUp is a simple and light-weight tray application which monitors a remote server and shows a notification when it comes online. This is usefu...Vivid3D 2 - DirectX 10 3D ToolKit: The sequel to my first ever engine wrote several years ago. It is not based on it in anyway. VSIDev: VSI DevXTQXK_WORK: Actionscript 3.0东坡博客: 这是一个ASP。net mvc 2博客。New Releases.NET Extensions - Extension Methods Library: Release 2010.08: Added extension methods for Bitmap manipulation (scaling for now): - Bitmap.ScaleToSize() - Bitmap.ScaleToSizeProportional() - Bitmap.ScaleProport...Black Falcon Software's Database Data-Access-Layers: “SQLHELPER”, “ORAHELPER” - Handling Binary Data: See attached document...BTech Networking Library: BTech Networking Library: Same as pervious just new namespace, extended networking coming soon!!!Community Forums NNTP bridge: Community Forums NNTP Bridge V37: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...Generic Entity Model 2: GEM2 build 54383: This is second BETA release of GEM2! Please see source code change sets for updates! Following implementation is not included in this release: My...Hades: Projet Hadès - Official Demo - Version 0.1.0 Beta: ---------------------------------------------------------------------------- - Projet Hadès - Official Demo - Version 0.1.0 Beta ------------------...HeinanOS: HeinanOS M1 Source Code: You can download HeinanOS M1 Source Code and contribute to HeinanOS development! Be aware that you should not use this code for your own systems! ...HeinanOS: Milestone 1: This is the first major release for HeinanOS 1.0 Please note this is a PRE-RELEASE! 
This release includes the following features: -Bootable DOS-...HKGolden Express: HKGoldenExpress (Build 201006131900): New features: (None) Bug fix: Incorrect message submit date of message/ replies. (Note: Showing message submit date is enabled since Build 20100...HKGolden Express: HKGoldenExpress (Build 201006140110): New features: (None) Bug fix: (None) Improvements: (None) Other changes: Set time zone of message date as Hong Kong. Adjusted the format of messa...MediaCoder.NET: MediaCoder.NET v1.0 Beta 1.5: Installer file for MediaCoder.NET v1.0 beta 1.5. Now converts multiple files.MME: First release: Features of this release 1. One installer MME.msi. However you can also install MMEMenuManagerSetup.vsix which installs a project template that e...MSBuild Launch Pad (mPad): 1.1 Beta 1: Platform selection box is added.MVMMapper: MVMMapper Release v 1.0.1: This release has no downloadable documentation. Please use the Documentation section to get started.NginxTray: NginxTray 0.7 RC2: NginxTray 0.7 RC2PowerAuras: PowerAuras-3.0.0K-beta3: New Auras: Item Name Equipment Slot Tracking Changes from beta1 5 new aura textures Fixed Tracking bug Added graphical equipment slot sele...PowerAuras: PowerAuras-3.0.0K-beta4: New Auras: Item Name Equipment Slot Tracking Changes from beta1 5 new aura textures Fixed Tracking bug Added graphical equipment slot sele...Scriptagility for DotNetNuke: Scriptagility 1.0 (Beta): Initial public release please evaluate and feedbackSharpDevelop: SharpDevelop 4.0 Beta 1: Release notes: http://community.sharpdevelop.net/forums/t/11388.aspxsimpleLinux Distro: Project X3: This is an example of download for simpleLinuxSOAPI - StackOverflow API Parser/Wrapper Generator: SOAPI Beta 3: The SOAPI Beta 3 download will be made availabe later today when the initial documentation is complete. The previously available Beta 1 download h...Sofa: Initial release V1.0: This is the first release of Sofa. As it is made of code being previously used, as we tested it is a stable release. But bugs are always possible,...Tag Cloud Control for asp.net: Tag Cloud Control for asp.net: Tag Cloud Control for asp.net allows the user to display the most important keywords to display in tag cloud. Each Tag has it own navigation url to...UppityUp: UppityUp v0.1: First functional version, supports monitoring availability by ping (ICMP) requests. Fit for general use. Consists of one standalone .exe file - no...VCC: Latest build, v2.1.30613.0: Automatic drop of latest buildWindStyle ExifInfo for Windows Live Writer: 1.1.0.0: Add: Multiple Language(English and Simplified Chinese); Add: Insert multiple files; Fix: Error when insert pictures without Exif info; Update: Icon...Work Recorder - Hold on own time!: WorkRecorder 1.2: +Add a whole day chartXsltDb - DotNetNuke Module Builder: 01.01.24: Syntax highlighting delivered!New samples for RadControls. 
On single page you can find RadTreeView, RadRating, RadChart, RadFormDecorator, RadEdito...xUnit.net Contrib: xunitcontrib 0.4 (ReSharper 5.0 RTM + dotCover): xunitcontrib release 0.4 (ReSharper runner) This release provides a test runner plugin for Resharper 5.0, 4.5 and 4.1, targetting all versions of x...Most Popular ProjectsCommunity Forums NNTP bridgeRIA Services EssentialsNeatUploadBxf (Basic XAML Framework)Agile Personal Development Methodology.NET Transactional File ManagerSOLID by exampleASP.NET MVC Time PlannerWEI ShareSiverlight ProjectMost Active ProjectsjQuery Library for SharePoint Web Servicespatterns & practices – Enterprise LibraryNB_Store - Free DotNetNuke Ecommerce Catalog ModuleRhyduino - Arduino and Managed CodeCommunity Forums NNTP bridgeCassandraemonBlogEngine.NETLightweight Fluent WorkflowMediaCoder.NETAndrew's XNA Helpers

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API, the calls across these APIs generally mimic the Berkeley DB C-API which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage. At that time databases almost always lived on a single node, even the most sophisticated databases only had simple fail-over two node solutions. If you had a lot of data to store you would choose between the few commercial RDBMS solutions or to write your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM" mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage. Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheep, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law, processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated both in business and the personal markets. In addition to the new hardware entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes cause a massive explosion of data and a need to analyze and understand that data. Taken together this resulted in an entirely different landscape for database storage, new solutions were needed. A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID. But in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities in common. There were designed to address massive amounts of data, millions of requests per second, and scale out across multiple systems. The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record, it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web store front didn't require the services of an RDBMS. The queries were simple, key/value look-ups or simple range queries with only a few queries that required more complex joins. 
They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations. The success of Dynamo, and it's design, inspired the next generation of Non-SQL, distributed database solutions including Cassandra, Riak and Voldemort. The problem their designers set out to solve was, "reliability at massive scale" so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database; either Berkeley DB, Berkeley DB Java Edition, MySQL or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions and one, Voldemort, choose Berkeley DB Java Edition for it's node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only log-based on-disk storage similar type of storage as Berkeley DB except that it is based on a hash table which must reside entirely in-memory rather than a btree which can live in-memory or on disk. Berkeley DB evolved too, we added high availability (HA) and a replication manager that makes it easy to setup replica groups. Berkeley DB's replication doesn't partitioned the data, every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget allowing Berkeley DB to eventually become consistent. Berkeley DB's HA scales-out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running Berkeley DB HA. That scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across nodes in the replication group and some allow writes as well as reads at any node, Berkeley DB HA does not. So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases when you boil them down to a single node you still have to store the data to some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM; the distributed, sometimes-consistent, partitioned, scale-out storage that manage key/value or document sets and generally have some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really. Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to come to appreciate the sophisticated features found in Berkeley DB, even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely. 
Our key/value API could be extended over the net using any of a number of existing network protocols such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing, we could jump into this No SQL market withing a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...

    Read the article

  • Flashing your Windows Phone Dummies

    - by Martin Hinshelwood
    The rate at which vendors release new updates for the HD2 is ridiculously slow. You have to wait for Microsoft to release the new OS, then you wait for HTC to build it into a ROM, and then you have to wait up to 6 months for your operator to badly customise it for their network. Once Windows Phone 7 is released this problem should go away as Microsoft is likely to be able to update the phone over the air, but what do we do until then? I want Windows Mobile 6.5.5 now!   I’m an early adopter. If there is a new version of something then that’s the version I want. As long as you accept that you are using something on a “let the early adopter beware” and accept that there may be bugs, sometimes serious crippling bugs the go for it. Note that I won't be responsible if you end up bricking your phone, unlocking or flashing your radio or ROM can be risky. If you follow the instructions then you should be fine, I've flashed my phones (SPV, M300, M1000, M2000, M3100, TyTN, TyTN 2, HD2) hundreds of times without any problems! I have been using Windows Mobile 6.5.5 before it was called 6.5.5 and for long enough that I don’t even remember when I first started using it. I was using it on my HTC TyTN 2 before I got an HD2 a couple of months before Christmas, and the first custom ROM’s for the HD2 were a couple of months after that. I always update to the latest ROM that I like, and occasionally I go back to the stock ROM’s to have a look see, but I am always disappointed. Terms: Soft Reset: Same as pulling out the battery, but is like a reboot for your phone Hard Reset: Reinstalls the Operating system from the Image that is stored on it ROM: This is Image that is loaded onto your phone and it is used to reinstall your phone whenever you do a “hard reset”. Stock ROM: A ROM from the original vendor… So HTC Cook a ROM: Referring to Cooking a ROM is the process a ROM developer goes through to take all of the parts (OS, Drivers and Applications) that make up a running phone and compiling them into a ROM. ROM Kitchen: A place where you get an SDK and all the component parts of the phone: OD, Drivers and Application. There are usually lots of Tools for making it easier to compile and build the image. Flashing: The process of updating one of the layers of your phone with a new layer Bricked: This is what happens when flashing goes wrong. Your phone is now good for only one thing… stopping paper blowing away in a windy place. You can “cook” you own ROM using one of the many good “ROM Kitchens” or you can use a ROM built and tested by someone else. I have cooked my own ROM before, and while the tutorials are good, it is a lot of hassle. You can only Flash new ROM’s that are specifically for your phone only so find a ROM for your phone and XDA Developers is the best place to look. It has a forum based structure and you can find your phone quite easily. XDA Developer Forum Installing a new ROM does have its risks. In the past there have been stories about phones being “bricked” but I have not heard of a bricked phone for quite some years. if you follow the instructions carefully you should not have any problems. note: Most of the tools are written by people for whom English is not their first language to you will need concentrate hard to understand some of the instructions. Have you ever read a manual that was just literally translated from another language? 
Enough said… There are a number of layers on your phone that you will need to know about: SPL: This is the lowest level, like a BIOS on a PC and is the Operating Systems gateway to the hardware Radio: I think of this as the hardware drivers, and you will need a different Radio for CDMA than GSM networks ROM: This is like your Windows CD, but it is stored internally to the Phone. Flashing your phone consists of replacing one Image with another and then wiping your phone and automatically reinstall from the Image. Sometimes when you download an Image wither it is for a Radio or for ROM you only get a file called *.nbh. What do you do with this? Well you need an RUU application to push that Image to your phone. The RUU’s are different per phone, but there is a CustomRUU for the HD2 that will update your phone with any *.nbh placed in the same directory. Download and Instructions for CustomRUU #1 Flash HardSPL An SPL is kind of like a BIOS, and the default one has checks to make sure that you are only installing a signed ROM. This would prevent you from installing one that comes from any other source but the vendor. NOTE: Installing a HARD SPL invalidates your warranty so remember to Flash your phone with a “stock” vendor ROM before trying to send your phone in for repairs. Is the warranty reinstated when you go back to a stock ROM? I don’t know… Updating your SPL to a HardSPL effectively unlocks your phone so you can install anything you like. I would recommend the HardSPL2. Download and Instructions for HardSPL2 #2 Task29 One of the problems that has been seen on the HD2 when flashing new ROM’s is that things are left over from the old ROM. For a while the recommendation was to Flash a stock ROM first, but some clever cookies have come up with “Task29” which formats your phone first. After running this your phone will be blank and will only boot to the white HTC logo and no further. You should follow the instructions and reboot (remove battery) and hold down the “volume down” button while turning you HD2 on to enter the bootloader. From here you can run CustomRUU once the USB message appears. Download and Instructions for Task29 #2 Flash Radio You may need to play around with this one, there is no good and bad version and the latest is not always the best. You know that annoying thing when you hit “end call” on your phone and nothing happens? Well that's down to the Radio. Get this version right for you and you may even be able to make calls. From a Windows Mobile as well Download There are no instructions here, but they are the same as th ROM, but you use this *.nbh file. #3 Flash ROM If you have gotten this far then you are probably a pro by now Just download the latest ROM below and Flash to your phone. I have been really impressed by the Artemis line of ROM’s but it is no way the only choice. I like this one as the developer builds them as close to the stock ROM as possible while updating to the latest of everything. Download and Instructions for  Artemis HD2 vXX Conclusion While updating your ROM is not for the faint hearted it provides more options than the Stock ROM’s and quicker feature updates than waiting… Technorati Tags: WM6

    Read the article

  • How to Organize a Programming Language Club

    - by Ben Griswold
    I previously noted that we started a language club at work.  You know, I searched around but I couldn’t find a copy of the How to Organize a Programming Language Club Handbook. Maybe it’s sold out?  Yes, Stack Overflow has quite a bit of information on how to learn and teach new languages, and there’s also a good number of online tutorials which provide language introductions, but I was interested in group learning.  After two months of meetings, I present to you the Unofficial How to Organize a Programming Language Club Handbook.
    1. Gauge interest. Start by surveying prospects. “Excuse me, smart-developer-whom-I-work-with-and-I-think-might-be-interested-in-learning-a-new-coding-language-with-me. Are you interested in learning a new language with me?” If you’re lucky, you work with a bunch of really smart folks who aren’t shy about teaching/learning in a group setting and you’ll have a collective interest in no time.  Simply suggesting the idea is the only effort required.  If you don’t work in this type of environment, maybe you should consider a new place of employment.
    2. Make it official. Send out a “Welcome to the Club” email: There’s been talk of folks itching to learn new languages – Python, Scala, F# and Haskell to name a few.  Rather than taking on new languages alone, let’s learn in the open.  That’s right.  Let’s start a languages club.  We’ll have everything a real club needs – secret handshake, goofy motto and a high-and-mighty sense that we’re better than everybody else. T-shirts?  Hell YES!  Anyway, I’ve thrown this idea around the office and no one has laughed at me yet so please consider this your very official invitation to be in THE club. [Insert your ideas about how the club might be run, solicit feedback and suggestions, ask what other folks would like to get out of the club, comment about club hazing practices and talk up the T-shirts even more. Finally, call out the languages you are interested in learning and ask the group for their list.]
    3. Send out invitations to the first meeting.  Don’t skimp!  Hallmark greeting cards for everyone.  Personalized.  Hearts over the i’s and everything.  Oh, and be sure to include the list of suggested languages with vote counts.  Here is the list of languages we are interested in: Python 5, Ruby 4, Objective-C 3, F# 2, Haskell 2, Scala 2, Ada 1, Boo 1, C# 1, Clojure 1, Erlang 1, Go 1, Pi 1, Prolog 1, Qt 1.
    4. At the first meeting, there must be cake.  Lots of cake. And you should tackle some very important questions: Which language should we start with?  You can immediately go with the top vote getter, or you could do as we did and designate each person to provide a high-level review of one of the proposed languages over the next two weeks.  After all presentations are completed, vote on the language. Our high-level review consisted of answers to a series of questions. Decide how often and where the group will meet.  We, for example, meet for a brown bag lunch every Wednesday.  Decide how you’re going to learn.  We determined that the best way to learn is to just dive in and write code.  After choosing our first language (Python), we talked about building an application, or performing coding katas, but we ultimately chose to complete a series of Project Euler problems.  We kept it simple – each member works out the same two problems each week in preparation for a code review the following Wednesday.
    5. Code, Review, Learn.  Prior to the weekly meeting, everyone uploads their solutions to our internal wiki.  
Each Project Euler problem has a dedicated page.  In the meeting, we use a really fancy HD projector to show off each member’s solution.  It is very important to use an HD projector.  Again, don’t skimp!  Each code author speaks to their solution, everyone else comments, applauds, points fingers and laughs, etc.  As much as I’ve learned from solving the problems on my own, I’ve learned at least twice as much at the group code review.  6.  Rinse. Lather. Repeat.  We’ve hosted the language club for 7 weeks now.  The first meeting just set the stage.  The next two meetings provided a review of the languages followed by a first language selection.  The remaining meetings focused on Python and Project Euler problems.  Today we took a vote as to whether or not we’re ready to switch to another language and/or another problem set.  Pretty much everyone wants to stay the course for a few more weeks at least.  Until then, we’ll continue to code the next two solutions, review and learn. Again, we’ve been having a good time with the programming language club.  I’m glad it got off the ground.  What do you think?  Would you be interested in a language club?  Any suggestions on what we might do better?
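    For readers who haven't seen Project Euler, here is a minimal sketch of the kind of solution a member might post to the wiki. It is written in C# rather than the Python we actually chose, and it tackles Project Euler Problem 1 (sum of all multiples of 3 or 5 below 1000) purely as an illustration; it is not one of the club's actual submissions.

    using System;
    using System.Linq;

    class Euler001
    {
        static void Main()
        {
            // Sum every number below 1000 that is divisible by 3 or 5.
            int sum = Enumerable.Range(1, 999)
                                .Where(n => n % 3 == 0 || n % 5 == 0)
                                .Sum();
            Console.WriteLine(sum); // prints 233168
        }
    }

    The point of the weekly review is less about the answer itself and more about comparing how differently each person arrives at it.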

    Read the article

  • Robotic Arm &ndash; Hardware

    - by Szymon Kobalczyk
    This is the first in a series of articles about a project I've been building in my spare time since last summer. Actually, it all began when I was researching the topic of modeling human motion kinematics in order to create a gesture recognition library for Kinect. This ties heavily into the motion theory of robotic manipulators, so I also glanced at some designs of robotic arms. Somehow I stumbled upon this cool looking open source robotic arm: It was featured on Thingiverse and published by user jjshortcut (Jan-Jaap). Since for some time I have been hooked on toying with microcontrollers, robots and other electronics, I decided to give it a try and build it myself. In this post I will describe the hardware build of the arm, and in later posts I will be writing about the software to control it. Another reason to build the arm myself was the cost factor. Even small commercial robotic arms are quite expensive – products from Lynxmotion and Dagu look great but both cost around USD $300 (actually there is one cheap arm available but it looks more like a toy to me). In comparison this design is quite cheap. It uses seven hobby grade servos and even the cheapest ones should work fine. The structure is built from a set of laser cut parts connected with a few metal spacers (15mm and 47mm) and lots of M3 screws. Other than that you’d only need a microcontroller board to drive the servos. So in total it comes out a lot cheaper to build it yourself than to buy an off-the-shelf robotic arm. Oh, and if you don’t like this one there are a few more robotic arm projects at Thingiverse (including one by oomlout). Laser cut parts: Some time ago I built another robot using laser cut parts so I knew the process already. You can grab the design files in both DXF and EPS format from Thingiverse, and there are also 3D models of each part in STL. Actually the design is split into a second project for the mini servo gripper (there is also a standard servo version available but it won’t fit this arm).  I wanted to make some small adjustments to the layout and add measurements to the parts before sending them for cutting. I’ve looked at some free 2D CAD programs, and finally did all this work using QCad 3 Beta, which worked great for me (I also tried LibreCAD but it didn’t work that well). All parts are cut from 4 mm thick material. Because I was worried that acrylic is too fragile and might break, I also ordered another set cut from plywood. In the end I built it from plywood because it was easier to glue (I was told acrylic requires a special glue). Btw. I found a great laser cutter service in Kraków and highly recommend it (www.ebbox.com.pl). It cost me only USD $26 for both sets ($16 acrylic + $10 plywood). Metal parts: I bought all the M3 screws and nuts at a local hardware store. Make sure to look for nylon lock (nyloc) nuts for the gripper because otherwise it unscrews and comes apart quickly. I couldn’t find a local store with metal spacers and had to order them online (you’d need 11 x 47mm and 3 x 15mm). I think I paid less than USD $10 for all the metal parts. Servos: This arm uses five standard size servos to drive the arm itself, and two micro servos for the gripper. The author of the project used a Modelcraft RS-2 Servo and a Modelcraft ES-05 HT Servo. I had two Futaba S3001 servos laying around, and ordered additional TowerPro SG-5010 standard size servos and TowerPro SG90 micro servos. However it turned out that the SG90 wouldn't fit in the gripper, so I had to replace it with a slightly smaller E-Sky EK2-0508 micro servo. 
    Later it also turned out that the Futaba servos make some strange noise while working, so I swapped one with a TowerPro SG-5010, which has higher torque (8 kg/cm). I also bought three servo extension cables. All the servos cost me USD $45. Assembly: The build process is not difficult, but you need to think carefully about the order of assembly. You can do the base and upper arm first. Because the two servos in the base are close together, you need to put in the first one with one piece of the lower arm already connected before you put in the second servo. Then you connect the upper arm and finally put on the second piece of the lower arm to hold it together. The gripper and base require some gluing, so think that through too. Make sure to look closely at all the photos on Thingiverse (including other people's copies) and read the additional posts on jjshortcut's blog: My mini servo grippers and completed robotic arm, and Multiply the robotic arm and electronics. Here is also Rob's copy cut from aluminum. My assembled arm looks like this – I think it turned out really nice: Servo controller board: The last piece of hardware I needed was an electronic board that would take commands from the PC and drive all seven servos. I could probably have used an Arduino for this task, and in fact there are several Arduino servo shields available (for example from Adafruit or Renbotics).  However one problem is that most support only up to six servos, and second that their accuracy is limited by the Arduino’s timer frequency. So instead I looked for a dedicated servo controller and found the series of Maestro boards from Pololu. I picked the Pololu Mini Maestro 12-Channel USB Servo Controller. It has many nice features including a native USB connection, high resolution pulses (0.25µs) with no jitter, built-in speed and acceleration control, and even scripting capability. Another cool feature is that besides servo control, each channel can be configured as either a general input or output. So far I’m using seven channels, so I still have five available to connect some sensors (for example a distance sensor mounted on the gripper might be useful). And a last but important factor was that they have an SDK in .NET – what more could I wish for! The board itself is very small – half the size of a Tic-Tac box. I picked one up for about USD $35 in this store. Perhaps another good alternative would be the Phidgets Advanced Servo 8-Motor – but it is significantly more expensive at USD $87.30. The Maestro Controller Driver and Software package includes the Maestro Control Center program, which lets you immediately configure the board. For each servo I first figured out its range of movement and set the min/max limits. I played with the speed and acceleration values as well. A big issue for me was that there are two servos that control the position of the lower arm (shoulder joint), and both have to be moved at the same time. This is where the scripting feature of the Pololu board turned out very helpful. I wrote a script that synchronizes the position of the second servo with the first one – so now I only need to move one servo and the other will follow automatically. This turned out to be tricky because I couldn’t find a simple offset mapping of the move range for each servo – I had to divide it into several sub-ranges and map each individually. The scripting language is a bit assembler-like but gets the job done. And there is even runtime debugging and a stack view available. Altogether I’m very happy with the Pololu Mini Maestro Servo Controller, and with this final piece I completed the build and was able to move my arm from the Maestro Control program.   
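    For readers wondering what driving the board from code might look like before the software post arrives: besides its .NET SDK, the Maestro also speaks a simple serial protocol. The sketch below is my own illustration, not code from this project; it assumes the documented Pololu compact protocol (command 0x84 = Set Target, with the target given in quarter-microseconds and split into two 7-bit bytes) and a made-up COM port name.

    using System;
    using System.IO.Ports;

    class MaestroDemo
    {
        // Sends a "Set Target" command for one channel using the Pololu compact protocol.
        // target is in units of quarter-microseconds, e.g. 6000 = 1500 µs (servo centre).
        static void SetTarget(SerialPort port, byte channel, ushort target)
        {
            byte[] command =
            {
                0x84,                          // Set Target command
                channel,                       // servo channel number (0-11 on the Mini Maestro 12)
                (byte)(target & 0x7F),         // low 7 bits of the target
                (byte)((target >> 7) & 0x7F)   // high 7 bits of the target
            };
            port.Write(command, 0, command.Length);
        }

        static void Main()
        {
            // "COM3" is a placeholder - use whatever port the Maestro's command port shows up as.
            using (var port = new SerialPort("COM3", 9600))
            {
                port.Open();
                SetTarget(port, 0, 6000); // centre the servo on channel 0
            }
        }
    }

    The upcoming software posts use the Pololu .NET SDK rather than raw serial, so treat this only as a peek at what the board expects on the wire.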
    The total cost of my robotic arm was:
    $10 laser cut parts
    $10 metal parts
    $45 servos
    $35 servo controller
    -----------------------
    $100 total
    So here you have all the information about the hardware. In the next post I’ll start talking about the software, which I wrote in Microsoft Robotics Developer Studio 4. Stay tuned!

    Read the article

  • MSCC: Scripting - Administrator's toolbox of magic...

    Finally, we made it to have our April meetup - in May. The most obvious explanation is the increased amount of open source and IT activities that either the MSCC, the Linux User Group of Mauritius (LUGM), or the University of Mauritius Student's Computer Club is organising. It's absolutely incredible to see the recent hype of events here on the island. And I'm loving it! Unfortunately, we also had to deal with arranging for a location this time. It was kind of an odyssey, as my requests (and phone calls) hadn't been answered, even though I tried several times - well, kind of disappointing, and I have to look into that for future gatherings. In my opinion, it is essential that two parameters of a community meeting are fixed as early as possible: Location, and Date and time. You can't just change one or both at the very last minute. Well, this time we had to do it due to unforeseen reasons, and I apologise to any MSCC member who couldn't make it to our April meetup. Okay, lesson learned, but now back to the actual meetup report ... Shortly after the meeting I posted the following statement as my first impression: "Spontaneous and improvised :) No, seriously, Ish and Dan had well prepared presentations on shell scripting, mainly focused towards Bourne Again Shell (bash), and the pros and cons of scripting versus actually writing something in a decent programming language. I thought that I could cut myself out of the equation but the demand for information about PowerShell was higher than expected..." Well, it turned out that the interest in Windows PowerShell was high, as I even got a couple of questions on it via social media networks during the evening. I would also like to mention that the number of attendees went back to what I would call a "standard" level of participation. This time there were 12 craftsmen, but again a good number of First Timers. Reactions of other attendees Here are some impressions and feedback from our participants: "Enjoyed the bash and powershell (linux / windows) presentations ..." -- Nadim on event comments "He [Daniel] also showed us some syntax loopholes in Bash that could leave someone with bad code." -- Ish on MSCC – Let's talk about Scripting   Glad to see a couple of first time attendees, especially students from the university itself. Some details on the presentations MSCC: First time visit at the University of Mauritius - Phase II Engineering Tower, room 2.9 Gimme some love ... bash and other shells Ish gave a great introduction into shell scripting as he spoke about existing shell environments and a little bit about their history. Furthermore, he talked about various built-in commands, the use of coreutils, the ability to daisy-chain multiple commands using pipes, and the importance of the standard I/O streams and their file descriptors in advanced scripting techniques. Combined with a couple of sample statements in the Linux terminal on an Ubuntu 14.04 machine, it was a solid presentation. Have a closer look at his slides - published on his blog on MSCC – Let's talk about Scripting. Oddities of scripting After the brief introduction into bash it was Daniel's turn to highlight a good number of oddities when working with shell scripts. First of all, it should be clear that scripting is not meant for any kind of software implementation, but simply to automate administrative procedures and to simplify routine jobs on a system. One of the cool oddities that he mentioned is that everything (!) 
    in a shell is represented by strings; there are no other types like integer, float, date-time, etc. that you'd like to use in a full-fledged programming language. Let's have a look at his sample:  more to come... What's the output? As a conclusion, Daniel suggests that shell scripting should be limited, but not restricted, to automating repetitive command stacks and batch jobs, startup wrappers for applications in order to set up the execution environment, and other not too sophisticated jobs. But as soon as it involves a little bit more logic, or you need to rely on performance, it's better to write an application in Ruby, Python, or Perl (among others of course). This also enables the possibility to test your code properly (a small sketch of that trade-off, the BIOS query below rewritten as a compiled .NET program, appears at the end of this post). MSCC: Ish talking about Bourne Again Shell (bash) and shell scripting to automate regular tasks MSCC: Daniel gives an overview about the pros and cons of shell scripting versus programming MSCC: PowerShell as your scripting solution on Windows operating systems The path of the Enlightened is long ... and tough. Honestly, even though PowerShell was mentioned without any further details on the meetup's agenda, I didn't expect that there would be demand to give a presentation on Microsoft PowerShell after all. I had already taken this topic out of the announcement, but the audience wanted to have some information. Okay, then let's see what I could do - improvised style. While my machine booted and got hooked up to the projector, I started to talk about the beginnings of PowerShell back in 2006, and its predecessors MS-DOS and the Command Prompt. A throwback in history... always good for young people. As usual, Microsoft didn't get it at that time. Instead of listening to their clients' needs and demands, they ignored the need to administer Windows server farms without any UI tools. PowerShell is actually a result of this, and seeing that shell scripting has been a common, reliable and fast tool in an administrator's toolbox for decades, Microsoft had to move from their Microsoft Management Console (MMC) to a broader approach. It's not like shell scripting was something new; it is in daily use on alternative operating systems like AIX, HP-UX, Solaris, and last but not least Linux. Most interestingly, Microsoft is very good at renovating existing architectures, and over the years PowerShell not only replaced their own combination of Command Prompt and Scripting Hosts (VBScript and CScript) but really turned into a challenging competitor on the market. The shell is easy to extend with cmdlets, and open to other Microsoft products like SQL Server and SharePoint, as well as third-party software applications. Similar to MMC, PowerShell also offers the ability to administer other machines remotely - only without a graphical user interface, and therefore it's easier to automate and schedule regular tasks. Following is a sample of a PowerShell script file (extension .ps1):
    $strComputer = "."
    $colItems = get-wmiobject -class Win32_BIOS -namespace root\CIMV2 -comp $strComputer
    foreach ($objItem in $colItems)
    {
        write-host "BIOS Characteristics: " $objItem.BiosCharacteristics
        write-host "BIOS Version: " $objItem.BIOSVersion
        write-host "Build Number: " $objItem.BuildNumber
        write-host "Caption: " $objItem.Caption
        write-host "Code Set: " $objItem.CodeSet
        write-host "Current Language: " $objItem.CurrentLanguage
        write-host "Description: " $objItem.Description
        write-host "Identification Code: " $objItem.IdentificationCode
        write-host "Installable Languages: " $objItem.InstallableLanguages
        write-host "Installation Date: " $objItem.InstallDate
        write-host "Language Edition: " $objItem.LanguageEdition
        write-host "List Of Languages: " $objItem.ListOfLanguages
        write-host "Manufacturer: " $objItem.Manufacturer
        write-host "Name: " $objItem.Name
        write-host "Other Target Operating System: " $objItem.OtherTargetOS
        write-host "Primary BIOS: " $objItem.PrimaryBIOS
        write-host "Release Date: " $objItem.ReleaseDate
        write-host "Serial Number: " $objItem.SerialNumber
        write-host "SMBIOS BIOS Version: " $objItem.SMBIOSBIOSVersion
        write-host "SMBIOS Major Version: " $objItem.SMBIOSMajorVersion
        write-host "SMBIOS Minor Version: " $objItem.SMBIOSMinorVersion
        write-host "SMBIOS Present: " $objItem.SMBIOSPresent
        write-host "Software Element ID: " $objItem.SoftwareElementID
        write-host "Software Element State: " $objItem.SoftwareElementState
        write-host "Status: " $objItem.Status
        write-host "Target Operating System: " $objItem.TargetOperatingSystem
        write-host "Version: " $objItem.Version
        write-host
    }
    This gives you information about your BIOS and Windows OS. Then change the computer name to another one on your network (NetBIOS based) and run the script again. There are lots of samples and tutorials at the Microsoft Script Center, and I would advise you to pay a visit over there if you are more interested in PowerShell. The Script Center provides the download links, too. Upcoming Events What are the upcoming events here in Mauritius? So far, we have the following ones (incomplete list as usual) in chronological order: Hacking Defence (14 May 2014), WebCup Maurice (7 & 8 June 2014), Developers Conference (TBA ~ July 2014), Linuxfest 2014 (TBA ~ November 2014). Hopefully, there will be more announcements during the next couple of weeks and months. If you know about any other event, like a bootcamp, a code challenge or a hackathon here in Mauritius, please drop me a note in the comment section below this article. Thanks! My resume of the day Spontaneous and improvised :) The new location at the University of Mauritius turned out very well, there is plenty of space, and it could be a good choice for future meetings. Especially, having the ability to get more and more students into our IT community sounds like a great opportunity. Later during the day, I got some promising mails from Nadim regarding future sessions at the local branch of the Middlesex University. Well, we will see in the future... But for now this will be on hold until approximately October when students resume their regular studies. Anyway, it was a good experience at the university, and thanks again to the UoM Student's Computer Club that made the necessary arrangements for the MSCC!
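    As a follow-up to Daniel's scripting-versus-programming point (and as referenced above), here is a rough sketch of the same Win32_BIOS query written as a small compiled .NET program instead of a .ps1 script. This is only an illustration of the trade-off; it assumes a project reference to System.Management and prints just a handful of the properties the script lists.

    using System;
    using System.Management;

    class BiosQuery
    {
        static void Main()
        {
            // "." targets the local machine, just like $strComputer in the script above;
            // swap in a NetBIOS name such as "SERVER01" (hypothetical) for a remote query.
            string computer = ".";
            string scope = string.Format(@"\\{0}\root\CIMV2", computer);
            var searcher = new ManagementObjectSearcher(scope, "SELECT * FROM Win32_BIOS");

            foreach (ManagementObject bios in searcher.Get())
            {
                Console.WriteLine("Manufacturer:        {0}", bios["Manufacturer"]);
                Console.WriteLine("SMBIOS BIOS Version: {0}", bios["SMBIOSBIOSVersion"]);
                Console.WriteLine("Serial Number:       {0}", bios["SerialNumber"]);
                Console.WriteLine("Release Date:        {0}", bios["ReleaseDate"]);
            }
        }
    }

    Roughly the same few lines of work, but now with a compiler, a project file and a deployment step, which is exactly why Daniel recommends keeping jobs like this in a script unless real logic, performance or testability is needed.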

    Read the article

  • Asserting with JustMock

    - by mehfuzh
    In this post, I will be digging a bit deeper into Mock.Assert. This is a continuation of the previous post and covers the ways you can use assert for your mock expectations. I have used another traditional sample of Talisker that has a warehouse [Collaborator] and an order class [SUT] that will call upon the warehouse to check the stock and fill it up with items. Our sample interface for the warehouse and the order class look similar to:
    public interface IWarehouse
    {
        bool HasInventory(string productName, int quantity);
        void Remove(string productName, int quantity);
    }

    public class Order
    {
        public string ProductName { get; private set; }
        public int Quantity { get; private set; }
        public bool IsFilled { get; private set; }

        public Order(string productName, int quantity)
        {
            this.ProductName = productName;
            this.Quantity = quantity;
        }

        public void Fill(IWarehouse warehouse)
        {
            if (warehouse.HasInventory(ProductName, Quantity))
            {
                warehouse.Remove(ProductName, Quantity);
                IsFilled = true;
            }
        }
    }
    Our first example deals with the mock object assertion [my take] / assert all scenario. This will only act on the setups that have the “MustBeCalled” flag associated. To be more specific, let's first consider the following test code:
    var order = new Order(TALISKER, 0);
    var wareHouse = Mock.Create<IWarehouse>();

    Mock.Arrange(() => wareHouse.HasInventory(Arg.Any<string>(), 0)).Returns(true).MustBeCalled();
    Mock.Arrange(() => wareHouse.Remove(Arg.Any<string>(), 0)).Throws(new InvalidOperationException()).MustBeCalled();
    Mock.Arrange(() => wareHouse.Remove(Arg.Any<string>(), 100)).Throws(new InvalidOperationException());

    //exercise
    Assert.Throws<InvalidOperationException>(() => order.Fill(wareHouse));
    // it will assert first and second setup.
    Mock.Assert(wareHouse);
    Here, we have created the order object and the mock of IWarehouse, then set up our HasInventory and Remove calls of IWarehouse with my expectations, which are invoked by order.Fill internally. Now both of these setups are marked as “MustBeCalled”. There is one additional IWarehouse.Remove that is invalid and is not marked. On line 9, as we do order.Fill, the first and second setups will be invoked internally while the third one is left un-invoked. Here, Mock.Assert will pass successfully as both of the required ones are called as expected. But if we marked the third one as a must, then it would fail with a proper exception. Here, we can also see that I have used the same call for two different setups; this feature is called sequential mocking and will be covered later on. Moving forward, let's say we don't want this must call, and we want to do it specifically with a lambda. For that, let's consider the following code:
    //setup - data
    var order = new Order(TALISKER, 50);
    var wareHouse = Mock.Create<IWarehouse>();

    Mock.Arrange(() => wareHouse.HasInventory(TALISKER, 50)).Returns(true);

    //exercise
    order.Fill(wareHouse);

    //verify state
    Assert.True(order.IsFilled);
    //verify interaction
    Mock.Assert(() => wareHouse.HasInventory(TALISKER, 50));
    Here, the snippet shows a case for a successful order. I haven't used “MustBeCalled”; rather I used a lambda specifically to assert the call that I have made, which is more justified for the cases where we know exactly how the user code will behave. But here a question arises: how are we going to assert a mock call if we don't know what item the user code may request? 
    In that case, we can combine the matchers with our assert calls like we do for arrange:
    //setup - data
    var order = new Order(TALISKER, 50);
    var wareHouse = Mock.Create<IWarehouse>();

    Mock.Arrange(() => wareHouse.HasInventory(TALISKER, Arg.Matches<int>(x => x <= 50))).Returns(true);

    //exercise
    order.Fill(wareHouse);

    //verify state
    Assert.True(order.IsFilled);

    //verify interaction
    Mock.Assert(() => wareHouse.HasInventory(Arg.Any<string>(), Arg.Matches<int>(x => x <= 50)));
    Here, I have asserted a mock call for which I don't know the item name, but I do know that the number of items the user will request is less than or equal to 50. This kind of expression-based assertion is now possible with JustMock. We can extend this sample to properties as well, which will be covered shortly [in other posts]. In addition to just simple assertion, we can also use filters to limit the number of times a call has occurred, or to check whether it ever occurred. For example, in the first test code we have one setup that is never invoked. For that, it is always valid to use the following assert call:
    Mock.Assert(() => wareHouse.Remove(Arg.Any<string>(), 100), Occurs.Never());
    Or, for wareHouse.HasInventory we can do the following:
    Mock.Assert(() => wareHouse.HasInventory(Arg.Any<string>(), 0), Occurs.Once());
    Or, to be more specific, it's even better with:
    Mock.Assert(() => wareHouse.HasInventory(Arg.Any<string>(), 0), Occurs.Exactly(1));
    There are other filters that you can apply here using AtMost, AtLeast and AtLeastOnce, but I leave those to the readers. You can try the above sample, which is provided in the examples shipped with JustMock. Please do check it out and feel free to ping me about any issues.   Enjoy!!
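    Since the post leaves the AtMost, AtLeast and AtLeastOnce filters to the reader, here is a small sketch of how they would be used, continuing with the same wareHouse mock from the snippets above. It assumes these filters follow the same pattern as the Occurs.Never/Once/Exactly calls shown in the post, so treat it as illustrative rather than authoritative:
    // Passes as long as HasInventory was called no more than twice.
    Mock.Assert(() => wareHouse.HasInventory(Arg.Any<string>(), 0), Occurs.AtMost(2));

    // Passes only if HasInventory was called at least twice.
    Mock.Assert(() => wareHouse.HasInventory(Arg.Any<string>(), 0), Occurs.AtLeast(2));

    // Passes if Remove was called one or more times with any arguments.
    Mock.Assert(() => wareHouse.Remove(Arg.Any<string>(), Arg.Any<int>()), Occurs.AtLeastOnce());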

    Read the article

  • Silverlight Recruiting Application Part 4 - Navigation and Modules

    After our brief intermission (and the craziness of Q1 2010 release week), we're back on track here and today we get to dive into how we are going to navigate through our applications as well as how to set up our modules. That way, as I start adding the functionality - adding Jobs and Applicants, Interview Scheduling, and finally a handy Dashboard - you'll see how everything is communicating back and forth. This is all leading up to an eventual webinar, in which I'll dive into this process and give an honest look at the current story for MVVM vs. Code-Behind applications. (For a look at the future with SL4 and a little thing called MEF, check out what Ross is doing over at his blog!) Preamble... Before getting into really talking about this app, I've done a little bit of work ahead of time to create a ton of files that I'll need. Since the webinar is going to cover the Dashboard, it's not here, but otherwise this is a look at what the project layout looks like (and remember, this is both projects since they share the .Web): So as you can see, from an architecture perspective, the code-behind app is much smaller and more streamlined - aka a better fit for the one-man shop that is me. Each module in the MVVM app has the same setup, which is the Module class and corresponding Views and ViewModels. Since the code-behind app doesn't need a go-between project like Infrastructure, each MVVM module is instead replaced by a single Silverlight UserControl which will contain all the logic for each respective bit of functionality. My Very First Module Navigation is going to be key to my application, so I figured the first thing I would set up is my MenuModule. The first step here is creating a Silverlight Class Library named MenuModule, creating the View and ViewModel folders, and adding the MenuModule.cs class to handle module loading. The most important thing here is that my MenuModule inherits from IModule, which runs Initialize on each module as it is created; in my case, that adds the views to the correct regions. Here's the MenuModule.cs code:
    public class MenuModule : IModule
    {
        private readonly IRegionManager regionManager;
        private readonly IUnityContainer container;

        public MenuModule(IUnityContainer container, IRegionManager regionmanager)
        {
            this.container = container;
            this.regionManager = regionmanager;
        }

        public void Initialize()
        {
            var addMenuView = container.Resolve<MenuView>();
            regionManager.Regions["MenuRegion"].Add(addMenuView);
        }
    }
    Pretty straightforward here... We inject a container and region manager from Prism/Unity, then upon initialization we grab the view (out of our Views folder) and add it to the region it needs to live in. Simple, right? When the MenuView is created, the only thing in the code-behind is a reference to set the MenuViewModel as the DataContext. I'd like to achieve MVVM nirvana and have zero code-behind by placing the viewmodel in the XAML, but for the reasons listed further below I can't. Navigation - MVVM Since navigation isn't the biggest concern in putting this whole thing together, I'm using the Button control to handle different options for loading up views/modules. There is another reason for this - out of the box, Prism has command support for buttons, which is one less custom command I had to work up for the functionality I would need. 
    This comes from the Microsoft.Practices.Composite.Presentation assembly and looks as follows when put in code:
    <Button x:Name="xGoToJobs" Style="{StaticResource menuStyle}" Content="Jobs" cal:Click.Command="{Binding GoModule}" cal:Click.CommandParameter="JobPostingsView" />
    For quick reference, 'menuStyle' is just taking care of margins and spacing; otherwise it looks, feels, and functions like everyone's favorite Button. What MVVM's this up is that the Click.Command is tying to a DelegateCommand (also coming from Prism) on the backend. This setup allows you to tie user interaction to a command you set up in your viewmodel, which replaces the standard event-based setup you'd see in the code-behind app. Due to databinding magic, it all just works. When we get to looking at the DelegateCommand in code, it ends up like this:
    public class MenuViewModel : ViewModelBase
    {
        private readonly IRegionManager regionManager;
        public DelegateCommand<object> GoModule { get; set; }

        public MenuViewModel(IRegionManager regionmanager)
        {
            this.regionManager = regionmanager;
            this.GoModule = new DelegateCommand<object>(this.goToView);
        }

        public void goToView(object obj)
        {
            MakeMeActive(this.regionManager, "MainRegion", obj.ToString());
        }
    }
    Also for reference, ViewModelBase takes care of INotifyPropertyChanged and MakeMeActive, which switches views in the MainRegion based on the parameters (a minimal sketch of what ViewModelBase might look like appears at the end of this post). So our public DelegateCommand GoModule ties to our command on the view, which in turn calls goToView, and the parameter on the button is the name of the view (which we pass with obj.ToString()) to activate. And how do the views get the names I can pass as a string? When I called regionManager.Regions[regionname].Add(view), there is an overload that allows for .Add(view, "viewname"), with viewname being what I use to activate views. You'll see that in action next installment; I just wanted to clarify how that works. With this setup, I create two more buttons in my MenuView and the MenuModule is good to go. The last step is to make sure my MenuModule loads in my Bootstrapper:
    protected override IModuleCatalog GetModuleCatalog()
    {
        ModuleCatalog catalog = new ModuleCatalog();
        // add modules here
        catalog.AddModule(typeof(MenuModule.MenuModule));
        return catalog;
    }
    Clean, simple, MVVM-delicious. Navigation - Code-Behind Keeping with the history of significantly shorter code-behind sections in this series, Navigation will be no different. I promise. As I explained in a prior post, due to the one-project setup I don't have to worry about the same concerns, so my menu is part of MainPage.xaml. So I can cheese it a bit; since I've already got three buttons all set, I'm just copying that code and adding three click events instead of the command/commandparameter setup:
    <!-- Menu Region -->
    <StackPanel Grid.Row="1" Orientation="Vertical">
        <Button x:Name="xJobsButton" Content="Jobs" Style="{StaticResource menuStyleCB}" Click="xJobsButton_Click" />
        <Button x:Name="xApplicantsButton" Content="Applicants" Style="{StaticResource menuStyleCB}" Click="xApplicantsButton_Click" />
        <Button x:Name="xSchedulingModule" Content="Scheduling" Style="{StaticResource menuStyleCB}" Click="xSchedulingModule_Click" />
    </StackPanel>
    Simple, easy to use events, and no extra assemblies required! Since the code for loading each view will be similar, we'll focus on JobsView for now. The code-behind with this setup looks something like... 
    private JobsView _jobsView;

    public MainPage()
    {
        InitializeComponent();
    }

    private void xJobsButton_Click(object sender, RoutedEventArgs e)
    {
        if (MainRegion.Content.GetType() != typeof(JobsView))
        {
            if (_jobsView == null)
                _jobsView = new JobsView();
            MainRegion.Content = _jobsView;
        }
    }
    What am I doing here? First, for each 'view' I create a private reference which MainPage will hold on to. This allows for a little bit of state maintenance when switching views. When a button is clicked, first we make sure the 'view' type isn't already active (why load it again if it is already at center stage?), then we check if the view has been created and create it if necessary, then load it up. Three steps to switching views, and it's easy as pie. Part 4 Results The end result of all this is that I now have a menu module (MVVM) and a menu section (code-behind) that load their respective views. Since I'm using the exact same XAML (except with commands/events depending on the project), the end result for both is again exactly the same, and I'll show a slightly larger image to show it off: Next time, we add the Jobs Module and wire up RadGridView and a separate edit page to handle adding and editing new jobs. That's when things get fun. And somewhere down the line, I'll make the menu look slicker. :) Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.
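    As promised above, here is a guess at what a minimal ViewModelBase could look like for the MVVM project. The post never shows this class, so treat it purely as a sketch: it assumes Prism's IRegionManager/IRegion API (Regions[name], GetView, Activate) and standard INotifyPropertyChanged plumbing, era-appropriate for Silverlight 3 and Prism 2.
    using System.ComponentModel;
    using Microsoft.Practices.Composite.Regions;

    public abstract class ViewModelBase : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        // Standard INotifyPropertyChanged helper so derived viewmodels can raise change notifications.
        protected void OnPropertyChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(propertyName));
        }

        // Looks up a view by the name it was registered with (the .Add(view, "viewname") overload)
        // and makes it the active view in the given region.
        protected void MakeMeActive(IRegionManager regionManager, string regionName, string viewName)
        {
            IRegion region = regionManager.Regions[regionName];
            object view = region.GetView(viewName);
            if (view != null)
            {
                region.Activate(view);
            }
        }
    }
    If the real ViewModelBase differs, the important part is simply that GetView/Activate is what turns the "viewname" string passed from the menu buttons into a visible view.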

    Read the article

  • CodePlex Daily Summary for Saturday, March 27, 2010

    CodePlex Daily Summary for Saturday, March 27, 2010New ProjectsAlter gear SQL index Management: SQL Index management displays a list of indexes available for the chosen database and allows you to select an individual / group of indexes to be r...ASP League Ladder System: An ASP ladder / league system for online gaming league or real life leagues also.Augmented Reality Strategy Simulator: Augmented Reality Strategy Simulator is a software suite to promote computer aided strategy planning. Sports team can visualize their strategy usin...Boo syntax highlighting for Visual Studio 2010: Simple syntax hightlighting VSX add-in for Boo language in Visual Studio 2010.easySan: easySan zur einfachen Mitgliedsverwaltung im BRKFsUnit: FsUnit makes unit-testing with F# more enjoyable. It adds a special syntax to your favorite .NET testing framework.Laughing Dog XNA Framework: Laughing Dog is a simple to use, component based 2D framework for XNA game development. At present it is very early in development and as such is f...miniTodo: WPFでMVVMの練習にてきとうに作ったTODOアプリ 実用は無理です。My Common Library on .NET with CSharp: My Common Library on .NET with CSharp, it conclude database assecc, encrypt string, data caching, StringUtility, thank you for your view.Native code wrapping using c# : fsutil sparse commands: Ever thought about creating HUGE FILES for future use but felt bad for the wasted memory? Well, SPARSE FILES are the ANSWER! This FSUTIL SPARSE CO...Open SOA Platform: A centralized system for administering applications throught a SOA Enterprise Service Bus: Runtime environment (PROD, DEV, ...) , application and s...P-DBMS: Network and Database ProjectPraiseSight: PraiseSight is supposed to become a practical tool for churches to catalog an present their songs, lyrics and presentations on a beamer. The soluti...Pretty Good Frontend: Pretty Good Frontend is a sample frontend for ConfigMgr (SCCM) 2007 and MDT 2010 Zero Touch. S3Appender (Appender for Log4Net that Uses Amazon S3 For Storing Log Files): The S3Appender is a log4net appender that stores log events in either a MemoryStream or FileStream and sends them to S3 based on time intervals and...sEmit: sEmit (sms emitter) is an application written in C# which was built to send text messages. The project was founded in May 2009 by cansik. It works ...Silverlight RIA Tools: A tool set that generates a full RIA Solutions in Silverlightthommo cannon: Cannon for shooting down ThommosTianjin Polytechnic University Online Judge: Online Judge System Built on Microsoft technologies. Vision & Scope: A distributed OJ Solution on Windows and Cloud. Technologies used or planed...Tinare: Tinare is an byte encryption and decryption alogrithm. The input key is a string password.TinyPlug: Small Plugin Manager, written in C# Allows a project to define supported interfaces, and at runtime add plugins which support (inherit) these in...Utility niconv helps to convert text from one encoding to another: .NET implementation of GUN iconv console converter utility. The niconv program converts text from one encoding to another encoding. In the future r...WareFeed - Software Business Analytics: WareFeed is a simple but effective Software Business Analytics tool written in PHP and compatible others languages such as .NET, Java or Python. 
It...Y36API1: Semestralni projekt na Y36APINew ReleasesAlter gear SQL index Management: Setup 1.0.0: setup for first alpha releaseASP League Ladder System: ASPLeagueRelease_0_4_1: Release v 0.41Augmented Reality Strategy Simulator: Augmented Reality Strategy Simulator: Version 1.0 InstallerAutoAudit: AutoAudit 1.10e: Version 1.10e will be the final iteration of version 1 development. Version 2 will begin adding switches and options. Pleae email your suggestio...Boo syntax highlighting for Visual Studio 2010: Boo syntax VS 2010 - alpha: First release TODO: Multiline comments!Chargify.NET: Chargify.NET 0.6: Updated library, using Metered Components and updated Product information.Composer: V1.0.326.1000 Alpha: Initial Alpha release. Should be stable, with minor issues.CoNatural Components: CoNatural Components 1.6: Code fixes: Created helper classes to generate source code for type mapper/materializer. Fixed issue in optimized type materializer when loading ...CRM External View: 1.2: New Features in v1.2 release Password protected views. No more using Web Data Access role from v1. Filtering capabilities Caching for performan...Designit Video Embed Package: Release 1.1.0 beta1: You can now either have the video embeded directly in the template or have a preview in template that opens the video in a lightbox window.FsUnit: FsUnit 0.9.0 for NUnit: This release is for F# 2.0 and NUnit 2.5+.Laughing Dog XNA Framework: Laughing Dog 0.0.1: Laughing Dog - Alpla - v 0.0.1 First released version of the Laughing Dog framework.LiveUpload to Facebook: LiveUpload to Facebook 3.2: Version 3.2Become a fan on Facebook! Features Quickly and easily upload your photos and videos to Facebook, including any people tags added in Win...MapWindow6: MapWindow 6.0 msi March 26: This version adds the Join feature for creating a new "featureset" with attributes that are joined with attributes from a Excel data label named 'D...Mobile Broadband Logging Monitor: Mobile Broadband Logging Monitor 1.2.2: This edition supports: Newer and older editions of Birdstep Technology's EasyConnect HUAWEI Mobile Partner MWConn User defined location for s...Multiplayer Quiz: Release 1_6_351_0: A beta release of the next version. Please leave any errors in discussions or comments.Native code wrapping using c# : fsutil sparse commands: Fsutil sparse file native code - c sharp wrapper: Project Description A C# code wrapping a native code-Sparse files1 The code is about SPARSE files- the abillity to create huge files (for future us...Nice Libraries: 1.30 build 50325.01: Release 1.30 build 50325.01Pretty Good Frontend: Pretty Good Frontend binaries v1.0: This is the first public release of the Pretty Good Frontend binariesPylor: Pylor 0.1 alpha: This is the very first published version. I hope I can put a sample project soon.Quick Performance Monitor: Version 1.1 refresh: There was a typo or two in the sample batch file. Corrected now.Rapidshare Episode Downloader: RED v0.8.3: 0.8.1 introduced the ability to advance to the next episode. In 0.8.2 a bug was found that if episode number is less then 10, then the preceding 0...RapidWebDev - .NET Enterprise Software Development Infrastructure: RapidWebDev 1.52: RapidWebDev is an infrastructure helps to develop enterprise software solutions in Microsoft .NET easily and productively. 
This is the release vers...thommo cannon: game: gamethommo cannon: setup: setupthommo cannon: test: testTinare: Tinare DLL: Tinare DLL is a dynamic-link library written in C# which provides the functions to encrypt and decrypt a byte stream with tinare.WeatherBar: WeatherBar 2.1 [No Installation]: Minor changes to release 2.0 (http://weatherbar.codeplex.com/releases/view/42490). Fixed the bug that caused an exception to be thrown if the user...Most Popular ProjectsMetaSharpRawrWBFS ManagerASP.NET Ajax LibrarySilverlight ToolkitMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitLiveUpload to FacebookWindows Presentation Foundation (WPF)ASP.NETMost Active ProjectsRawrjQuery Library for SharePoint Web ServicesBlogEngine.NETMicrosoft Biology FoundationFarseer Physics Enginepatterns & practices: Composite WPF and SilverlightLINQ to TwitterTable2ClassFluent Ribbon Control SuiteNB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article

  • Segfault when iterating over a map<string, string> and drawing its contents using SDL_TTF

    - by Michael Stahre
    I'm not entirely sure this question belongs on gamedev.stackexchange, but I'm technically working on a game and working with SDL, so it might not be entirely offtopic. I've written a class called DebugText. The point of the class is to have a nice way of printing values of variables to the game screen. The idea is to call SetDebugText() with the variables in question every time they change or, as is currently the case, every time the game's Update() is called. The issue is that when iterating over the map that contains my variables and their latest updated values, I get segfaults. See the comments in DrawDebugText() below, it specifies where the error happens. I've tried splitting the calls to it->first and it->second into separate lines and found that the problem doesn't always happen when calling it->first. It alters between it->first and it->second. I can't find a pattern. It doesn't fail on every call to DrawDebugText() either. It might fail on the third time DrawDebugText() is called, or it might fail on the fourth. Class header: #ifndef CLIENT_DEBUGTEXT_H #define CLIENT_DEBUGTEXT_H #include <Map> #include <Math.h> #include <sstream> #include <SDL.h> #include <SDL_ttf.h> #include "vector2.h" using std::string; using std::stringstream; using std::map; using std::pair; using game::Vector2; namespace game { class DebugText { private: TTF_Font* debug_text_font; map<string, string>* debug_text_list; public: void SetDebugText(string var, bool value); void SetDebugText(string var, float value); void SetDebugText(string var, int value); void SetDebugText(string var, Vector2 value); void SetDebugText(string var, string value); int DrawDebugText(SDL_Surface*, SDL_Rect*); void InitDebugText(); void Clear(); }; } #endif Class source file: #include "debugtext.h" namespace game { // Copypasta function for handling the toString conversion template <class T> inline string to_string (const T& t) { stringstream ss (stringstream::in | stringstream::out); ss << t; return ss.str(); } // Initializes SDL_TTF and sets its font void DebugText::InitDebugText() { if(TTF_WasInit()) TTF_Quit(); TTF_Init(); debug_text_font = TTF_OpenFont("LiberationSans-Regular.ttf", 16); TTF_SetFontStyle(debug_text_font, TTF_STYLE_NORMAL); } // Iterates over the current debug_text_list and draws every element on the screen. // After drawing with SDL you need to get a rect specifying the area on the screen that was changed and tell SDL that this part of the screen needs to be updated. this is done in the game's Draw() function // This function sets rects_to_update to the new list of rects provided by all of the surfaces and returns the number of rects in the list. 
These two parameters are used in Draw() when calling on SDL_UpdateRects(), which takes an SDL_Rect* and a list length int DebugText::DrawDebugText(SDL_Surface* screen, SDL_Rect* rects_to_update) { if(debug_text_list == NULL) return 0; if(!TTF_WasInit()) InitDebugText(); rects_to_update = NULL; // Specifying the font color SDL_Color font_color = {0xff, 0x00, 0x00, 0x00}; // r, g, b, unused int row_count = 0; string line; // The iterator variable map<string, string>::iterator it; // Gets the iterator and iterates over it for(it = debug_text_list->begin(); it != debug_text_list->end(); it++) { // Takes the first value (the name of the variable) and the second value (the value of the parameter in string form) //---------THIS LINE GIVES ME SEGFAULTS----- line = it->first + ": " + it->second; //------------------------------------------ // Creates a surface with the text on it that in turn can be rendered to the screen itself later SDL_Surface* debug_surface = TTF_RenderText_Solid(debug_text_font, line.c_str(), font_color); if(debug_surface == NULL) { // A standard check for errors fprintf(stderr, "Error: %s", TTF_GetError()); return NULL; } else { // If SDL_TTF did its job right, then we now set a destination rect row_count++; SDL_Rect dstrect = {5, 5, 0, 0}; // x, y, w, h dstrect.x = 20; dstrect.y = 20*row_count; // Draws the surface with the text on it to the screen int res = SDL_BlitSurface(debug_surface,NULL,screen,&dstrect); if(res != 0) { //Just an error check fprintf(stderr, "Error: %s", SDL_GetError()); return NULL; } // Creates a new rect to specify the area that needs to be updated with SDL_Rect* new_rect_to_update = (SDL_Rect*) malloc(sizeof(SDL_Rect)); new_rect_to_update->h = debug_surface->h; new_rect_to_update->w = debug_surface->w; new_rect_to_update->x = dstrect.x; new_rect_to_update->y = dstrect.y; // Just freeing the surface since it isn't necessary anymore SDL_FreeSurface(debug_surface); // Creates a new list of rects with room for the new rect SDL_Rect* newtemp = (SDL_Rect*) malloc(row_count*sizeof(SDL_Rect)); // Copies the data from the old list of rects to the new one memcpy(newtemp, rects_to_update, (row_count-1)*sizeof(SDL_Rect)); // Adds the new rect to the new list newtemp[row_count-1] = *new_rect_to_update; // Frees the memory used by the old list free(rects_to_update); // And finally redirects the pointer to the old list to the new list rects_to_update = newtemp; newtemp = NULL; } } // When the entire map has been iterated over, return the number of lines that were drawn, ie. the number of rects in the returned rect list return row_count; } // The SetDebugText used by all the SetDebugText overloads // Takes two strings, inserts them into the map as a pair void DebugText::SetDebugText(string var, string value) { if (debug_text_list == NULL) { debug_text_list = new map<string, string>(); } debug_text_list->erase(var); debug_text_list->insert(pair<string, string>(var, value)); } // Writes the bool to a string and calls SetDebugText(string, string) void DebugText::SetDebugText(string var, bool value) { string result; if (value) result = "True"; else result = "False"; SetDebugText(var, result); } // Does the same thing, but uses to_string() to convert the float void DebugText::SetDebugText(string var, float value) { SetDebugText(var, to_string(value)); } // Same as above, but int void DebugText::SetDebugText(string var, int value) { SetDebugText(var, to_string(value)); } // Vector2 is a struct of my own making. 
It contains the two float vars x and y void DebugText::SetDebugText(string var, Vector2 value) { SetDebugText(var + ".x", to_string(value.x)); SetDebugText(var + ".y", to_string(value.y)); } // Empties the list. I don't actually use this in my code. Shame on me for writing something I don't use. void DebugText::Clear() { if(debug_text_list != NULL) debug_text_list->clear(); } }

    Read the article
