Search Results

Search found 1010 results on 41 pages for 'reuse'.

Page 9/41

  • Reverse engineering and redistributing code from .NET Framework

    - by ToxicAvenger
    More than once I have run into the following issue: classes I want to reuse in my applications (and possibly redistribute) exist in the .NET Framework assemblies, but are marked internal or private, so it is impossible to reuse them directly. One way around this is to disassemble them, pick the pieces you need, put them in a different namespace, and recompile (this takes some effort, but usually works quite well). My question is: is this legal? Is it only legal for the classes of the Framework that are available as source code anyway? Or is it illegal? I think Microsoft marks them internal or private primarily so that it doesn't have to support them and can change their interfaces later. But some pieces - be it SharePoint or WCF - are almost impossible to extend properly using only the public classes in the APIs, and rewriting everything from scratch generates a huge amount of effort before you even start solving the problem you intended to solve. In my eyes this is not a "dirty" approach per se. The classes Microsoft ships are obviously well tested, and if I reuse them under a different namespace I have "control" over them: if Microsoft changes the original implementation, my code won't be affected (some internals in WCF changed quite a bit with v4). It is not a super-clean approach, though. I would much prefer Microsoft making several of these classes public, because there are some nice classes hidden inside the framework.
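    A related alternative, where the goal is calling internal code rather than redistributing it, is late binding through reflection. The sketch below is a minimal, hedged illustration of that pattern - the type name "System.SomeInternalHelper" and method "DoWork" are placeholders, not real framework members:

        using System;
        using System.Reflection;

        class InternalTypeDemo
        {
            static void Main()
            {
                // Start from the assembly that contains the internal type.
                Assembly framework = typeof(Uri).Assembly;

                // Placeholder names - substitute the real internal type and member.
                Type hidden = framework.GetType("System.SomeInternalHelper");
                if (hidden == null)
                {
                    Console.WriteLine("Type not found in this framework version.");
                    return;
                }

                MethodInfo method = hidden.GetMethod("DoWork",
                    BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.Static);
                object result = method?.Invoke(null, new object[] { "input" });
                Console.WriteLine(result);
            }
        }

    The same support caveat applies either way: reflection avoids redistributing disassembled code, but it still couples you to members that may change or disappear in the next framework version.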

    Read the article

  • Is it possible to save the product key of Windows 8?

    - by Dibya Ranjan
    I have Windows 8 activated on my system, but I don't have the product key at hand. Now I want to format my system again. Is there any way I can reuse the key? Is there any way to retrieve the key from an activated Windows machine? Edit: I am not able to find the product key because I activated with a MAK (Multiple Activation Key), and I want to use the same key again after formatting my disk. I found the Volume Activation Management Tool on the Windows website, but I am not sure how to use it. Please tell me how I can reuse my key.

    Read the article

  • PowerShell Script to Deploy Multiple VMs on Azure in Parallel #azure #powershell

    - by Marco Russo (SQLBI)
    This blog is usually dedicated to Business Intelligence and SQL Server, but I couldn't easily find simple PowerShell scripts on the web to help me deploy a number of virtual machines on Azure that I use for testing and development. Since I need to deploy, start, stop, and remove many virtual machines created from a common image I created (you know, Tabular is not part of the standard images provided by Microsoft…), I wanted to minimize the time required to execute every operation from my Windows Azure PowerShell console (though I suggest using Windows PowerShell ISE). I also wanted to fire the commands in parallel, as soon as possible, without losing the results in the console. To execute multiple commands in parallel, I use the Start-Job cmdlet, and with Get-Job and Receive-Job I wait for job completion and display the messages generated during background command execution. This technique reduces execution time when I have to deploy, start, stop, or remove virtual machines. Please note that a few operations on Azure acquire an exclusive lock and cannot really execute in parallel, but only part of their execution time is subject to this lock, so you still get a better response time in these scenarios (this is the case when provisioning a new VM). Finally, when you remove the VMs you still have the disks that contained the virtual machines. They cannot be deleted immediately after the VM removal, because you have to wait until the removal operation completes on Azure. So I wrote a script that you run a few minutes after removing the VMs; it deletes the disks (and VHDs) no longer related to a VM. I check that each disk was associated with the original image name used to provision the VMs, so I don't remove other disks deployed by other batches that I might want to preserve. These examples are specific to my scenario; if you need a more complex configuration you will have to change and adapt the code. But if your need is to create multiple instances of the same VM running in a workgroup, these scripts should be good enough. I prepared the following PowerShell scripts:

    ProvisionVMs: provisions many VMs in parallel starting from the same image; it creates one service for each VM.
    RemoveVMs: removes all the VMs in parallel; it also removes the service created for each VM.
    StartVMs: starts all the VMs in parallel.
    StopVMs: stops all the VMs in parallel.
    RemoveOrphanDisks: removes all the disks no longer used by any VM; run this script a few minutes after the RemoveVMs script.
    ProvisionVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        # Name of storage account (where VMs will be deployed)
        $StorageAccount = "Copy the Label property you get from Get-AzureStorageAccount"

        function ProvisionVM( [string]$VmName ) {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                $Location = "Copy the Location property you get from Get-AzureStorageAccount"
                $InstanceSize = "A5" # You can use any other instance, such as Large, A6, and so on
                $AdminUsername = "UserName" # Write the name of the administrator account in the new VM
                $Password = "Password"      # Write the password of the administrator account in the new VM
                $Image = "Copy the ImageName property you get from Get-AzureVMImage"
                # You can list your own images using the following command:
                # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }

                New-AzureVMConfig -Name $VmName -ImageName $Image -InstanceSize $InstanceSize |
                    Add-AzureProvisioningConfig -Windows -Password $Password -AdminUsername $AdminUsername |
                    New-AzureVM -Location $Location -ServiceName "$VmName" -Verbose
            }
        }

        # Set the proper storage - you might remove this line if you have only one storage in the subscription
        Set-AzureSubscription -SubscriptionName $SubscriptionName -CurrentStorageAccount $StorageAccount

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list provisions one VM using the name specified in the argument
        # You can change the number of lines - use a unique name for every VM - don't reuse names
        # already used in other VMs already deployed
        ProvisionVM "test10"
        ProvisionVM "test11"
        ProvisionVM "test12"
        ProvisionVM "test13"
        ProvisionVM "test14"
        ProvisionVM "test15"
        ProvisionVM "test16"
        ProvisionVM "test17"
        ProvisionVM "test18"
        ProvisionVM "test19"
        ProvisionVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup of jobs
        Remove-Job *

        # Display batch completed
        echo "Provisioning VM Completed"

    RemoveVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        function RemoveVM( [string]$VmName ) {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                Remove-AzureService -ServiceName $VmName -Force -Verbose
            }
        }

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list removes one VM using the name specified in the argument
        RemoveVM "test10"
        RemoveVM "test11"
        RemoveVM "test12"
        RemoveVM "test13"
        RemoveVM "test14"
        RemoveVM "test15"
        RemoveVM "test16"
        RemoveVM "test17"
        RemoveVM "test18"
        RemoveVM "test19"
        RemoveVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup of jobs
        Remove-Job *

        # Display batch completed
        echo "Remove VM Completed"

    StartVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        function StartVM( [string]$VmName ) {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                Start-AzureVM -Name $VmName -ServiceName $VmName -Verbose
            }
        }

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list starts one VM using the name specified in the argument
        StartVM "test10"
        StartVM "test11"
        StartVM "test12"
        StartVM "test13"
        StartVM "test14"
        StartVM "test15"
        StartVM "test16"
        StartVM "test17"
        StartVM "test18"
        StartVM "test19"
        StartVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup of jobs
        Remove-Job *

        # Display batch completed
        echo "Start VM Completed"

    StopVMs

        # Name of subscription
        $SubscriptionName = "Copy the SubscriptionName property you get from Get-AzureSubscription"

        function StopVM( [string]$VmName ) {
            Start-Job -ArgumentList $VmName {
                param($VmName)
                Stop-AzureVM -Name $VmName -ServiceName $VmName -Verbose -Force
            }
        }

        # Select the subscription - this line is fundamental if you have access to multiple subscriptions
        # You might remove this line if you have only one subscription
        Select-AzureSubscription -SubscriptionName $SubscriptionName

        # Every line in the following list stops one VM using the name specified in the argument
        StopVM "test10"
        StopVM "test11"
        StopVM "test12"
        StopVM "test13"
        StopVM "test14"
        StopVM "test15"
        StopVM "test16"
        StopVM "test17"
        StopVM "test18"
        StopVM "test19"
        StopVM "test20"

        # Wait for all to complete
        While (Get-Job -State "Running") {
            Get-Job -State "Completed" | Receive-Job
            Start-Sleep 1
        }

        # Display output from all jobs
        Get-Job | Receive-Job

        # Cleanup of jobs
        Remove-Job *

        # Display batch completed
        echo "Stop VM Completed"

    RemoveOrphanDisks

        $ImageName = "Copy the ImageName property you get from Get-AzureVMImage"
        # You can list your own images using the following command:
        # Get-AzureVMImage | Where-Object {$_.PublisherName -eq "User" }

        # Remove all orphan disks coming from the image specified in $ImageName
        Get-AzureDisk |
            Where-Object {$_.AttachedTo -eq $null -and $_.SourceImageName -eq $ImageName} |
            Remove-AzureDisk -DeleteVHD -Verbose

    Read the article

  • SOA Checklist

    - by pat.shepherd
    In a recent meeting, the customer brought up a valid question: "How do I know if a problem/system is a good candidate for SOA (vs. using old but trusted techniques)?" I put this checklist together. If you can answer yes to two or more of these, it may well be a good candidate. This is V1, and I will likely update it over time. Comments (that are not spam or sales pitches) are appreciated. Part of the conversation was also around the fact that SOA has two faces to it: one is the obvious reuse possibilities; the other, which often gets forgotten, is that SOA simplifies integration even where the opportunities to reuse those integrations are small - at least the integrations are standards-based and more flexible. I did not write a lot of verbiage about each item; for example, "Business Process" implies that there is a set of step-wise actions that need to take place in a coordinated fashion, integrating with systems (and sometimes people, for approvals and other human-only actions) along the way.

    Read the article

  • Is it OK to use dynamic typing to reduce the amount of variables in scope?

    - by missingno
    Often, when I am initializing something, I have to use a temporary variable, for example:

        file_str = "path/to/file"
        file_file = open(file_str)

    or

        regexp_parts = ['foo', 'bar']
        regexp = new RegExp( regexp_parts.join('|') )

    However, I like to reduce the scope of my variables to the smallest scope possible, so there are fewer places where they can be (mis)used. For example, I try to use for(var i ...) in C++ so the loop variable is confined to the loop body. In these initialization cases, if I am using a dynamic language, I am often tempted to reuse the same variable in order to prevent the initial (and now useless) value from being used later in the function:

        file = "path/to/file"
        file = open(file)

        regexp = ['...', '...']
        regexp = new RegExp( regexp.join('|') )

    The idea is that by reducing the number of variables in scope I reduce the chances of misusing them. However, this sometimes makes the variable names look a little weird, as in the first example, where "file" initially refers to a filename. I think this would be a non-issue if I could use non-nested scopes:

        begin scope1
            filename = ...
            begin scope2
                file = open(filename)
            end scope1
            // use file here
            // can't use filename by accident
        end scope2

    but I can't think of any programming language that supports this. What rules of thumb should I use in this situation? When is it best to reuse the variable? When is it best to create an extra variable? What other ways do we have to solve this scope problem?
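    In curly-brace languages, one common approximation is to push the temporary into its own helper so it simply cannot leak into the caller's scope. A minimal C# sketch of the idea, mirroring the question's own regex example (all names are illustrative only):

        using System;
        using System.Text.RegularExpressions;

        class ScopeDemo
        {
            // The temporary "parts" array lives only inside the helper, so the
            // caller sees a single, fully initialized Regex and nothing else.
            static Regex BuildRegex()
            {
                string[] parts = { "foo", "bar" };
                return new Regex(string.Join("|", parts));
            }

            static void Main()
            {
                Regex regexp = BuildRegex();
                Console.WriteLine(regexp.IsMatch("foobar")); // True
                // "parts" is not in scope here and cannot be misused.
            }
        }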

    Read the article

  • ArchBeat Link-o-Rama for October 18, 2013

    - by OTN ArchBeat
    Enriching XMLType data using relational data – XQuery and fn:collection in action | Lucas Jellema
    Another detailed technical post from the always prolific Lucas Jellema.

    Evil Behind ChangeEventPolicy PPR in CRUD ADF 12c and WebLogic Stuck Threads | Andrejus Baranovskis
    The latest post from Oracle ACE Director Andrejus Baranovskis is a bit of a preview of his presentation at the upcoming UKOUG 2013 event.

    Podcast: Interview with authors of "Hudson Continuous Integration in Practice"
    For your listening pleasure... Here's an Oracle Author Podcast interview with "Hudson Continuous Integration in Practice" authors Ed Burns and Winston Prakash.

    Manual Recovery Mechanisms in SOA Suite and AIA | Shreenidhi Raghuram
    Solution architect Shreenidhi Raghuram's post combines information from several sources to provide "a quick reference for Manual Recovery of Faults within the SOA and AIA contexts."

    Event: Harnessing Oracle Weblogic and Oracle Coherence
    This OTN Virtual Developer Day event features eight sessions in two tracks, with presentations and hands-on labs for developers and architects delivered by experts in Weblogic, Coherence, and ADF. Registration is free. November 5th, 2013. 9am-1pm PT / 12pm-4pm ET / 1pm-5pm BRT.

    Podcast: IoT Challenges and Opportunities - Part 2
    Part 2 of the OTN ArchBeat Internet of Things podcast features a roundtable discussion of IoT challenges: massive data streams, security and privacy issues, evolving standards and protocols. Listen!

    Video: Design - ADF Architectural Patterns - Two for One Deal | Chris Muir
    Chris Muir explores the reuse of BTF workspaces across multiple applications and the advantages and disadvantages of reuse at the application level.

    Thought for the Day
    "Can't nothing make your life work if you ain't the architect." — Terry McMillan, American author (born October 18, 1951). Source: brainyquote.com

    Read the article

  • How to plan/manage multi-platform (mobile) products?

    - by PhD
    Say I have to develop an app that runs on iOS, Android, and Windows 8 Mobile. All three platforms use different programming languages, and the only 'reuse' I can see is of the boxes-and-lines drawings (UML :) and nothing else. So how do companies/programmers manage variation of the same product across different platforms, especially since the implementation languages differ? It's 'easier' in the desktop world, IMO, given the plethora of languages and cross-platform libraries that make your life easier. Not so in the mobile world. More so, product-line management principles don't seem to be all that applicable: what is common and what is variant doesn't really matter, since the application is the same (conceptually) and the entire implementation is variant. Some difficulties that come to mind:

    Bug fixing: Applications may be designed in a similar manner, but bug identification and fixing can be radically different. A bug on iOS may or may not exist on Android, and a bug-fix approach on one platform may not be the same on another (unless it's a semantic bug like a != b instead of a == b, which would require the same 'approach' to fixing in essence).

    Enhancements: Making a change on one platform would be radically different than on another.

    Code-design divergence: The way the code is written/organized, the class structures, etc., could be very different given the different implementation environments, further limiting reuse of the (above) UML models.

    There are of course many others - just keeping development in sync and making sure all applications are at the same version with the same set of features, etc. It seems the effort is 3x that of a single application. So how exactly does one manage this nightmarish situation? Some thoughts: split the application into client/server to limit the effect to the client side only (not always doable), or use frameworks like Unity-3D that take care of the cross-platform problem (mostly applicable to games and probably not to other applications). Any other ways of managing a platform line? What are some proven approaches to managing/taming the effects?

    Read the article

  • Why use Fragments?

    - by ahmed_khan_89
    I have read the documentation and some other question threads about this topic, and I don't really feel convinced; I don't clearly see the limits of this technique. Fragments are now presented as a best practice: every Activity should basically be a host for one or more Fragments, and should not reference a layout directly. Fragments are created in order to:

    allow the Activity to use many fragments, to switch between them, and to reuse these units... == but a Fragment is totally dependent on the Context of an Activity, so if I need something generic that I can reuse and handle in many Activities, I can create my own custom layouts or Views, and I will not care about the additional complexity layer that fragments would add.

    handle different resolutions better == OK for tablets/phones, in the case of a long process where we can show two (or more) fragments in the same Activity on tablets, and one at a time on phones. But why would I use fragments always?

    handle callbacks to navigate between Fragments (i.e. if the user is logged in I show one fragment, else I show another fragment). === Just look at how many bugs the Facebook SDK log-in has because of this, to understand that it is really (?)...

    considering that an Android application is based on Activities, adding another life cycle inside the Activity would make it better to design an application... I mean the modules, the scenarios, the data management, and the connectivity would be better designed that way. === This is an answer from someone who is used to seeing the Android SDK and Android framework with a fragments vision. I don't think it's wrong, but I am not sure it will give good results... and it is really abstract...

    ==== So why would I complicate my life by coding more and using them always? Otherwise, why is it a best practice if it's just a tool for some cases? What are these cases?

    Read the article

  • Modularity through HTTP

    - by Michael Williamson
    As programmers, we strive for modularity in the code we write. We hope that splitting the problem up makes it easier to solve, and allows us to reuse parts of our code in other applications. Object-orientation is the most obvious of many attempts to get us closer to this ideal, and yet one of the most successful approaches is almost accidental: the web. Programming languages provide us with functions and classes, and plenty of other ways to modularize our code. This allows us to take our large problem, split it into small parts, and solve those small parts without having to worry about the whole. It also makes it easier to reason about our code. So far, so good, but now that we've written our small, independent module, for example to send out e-mails to my customers, we'd like to reuse it in another application. By creating DLLs, JARs or our platform's package container of choice, we can do just that - provided our new application is on the same platform. Want to use a Java library from C#? Well, good luck - it might be possible, but it's not going to be smooth sailing. Even if a library exists, it doesn't mean that using it is going to be a pleasant experience. Say I want to use Java to write out an XML document to an output stream. You'd imagine this would be a simple one-liner. You'd be wrong:

        import org.w3c.dom.*;
        import java.io.*;
        import javax.xml.transform.*;
        import javax.xml.transform.dom.*;
        import javax.xml.transform.stream.*;

        private static final void writeDoc(Document doc, OutputStream out) throws IOException {
            try {
                Transformer t = TransformerFactory.newInstance().newTransformer();
                t.setOutputProperty(OutputKeys.DOCTYPE_SYSTEM, doc.getDoctype().getSystemId());
                t.transform(new DOMSource(doc), new StreamResult(out));
            } catch (TransformerException e) {
                throw new AssertionError(e); // Can't happen!
            }
        }

    Most of the time, there is a good chance somebody else has written the code before, but if nobody can understand the interface to that code, nobody's going to use it. The result is that most of the code we write is just a variation on a theme. Despite our best efforts, we've fallen a little short of our ideal, but the web brings us closer. If we want to send e-mails to our customers, we could write an e-mail-sending library. More likely, we'd use an existing one for our language. Even then, we probably wouldn't have niceties like A/B testing or DKIM signing. Alternatively, we could just fire some HTTP requests at MailChimp, and get a whole slew of features without getting anywhere near the code that implements them. The web is inherently language agnostic. So long as your language can send and receive text over HTTP, and probably parse some JSON, you're about as well equipped as anybody. Instead of building libraries for a specific language, we can build a service that almost every language can reuse. The text-based nature of HTTP also helps to limit the complexity of the API. As SOAP will attest, you can still make a horrible mess using HTTP, but at least it is an obvious horrible mess. Complex data structures are tedious to marshal to and from text, providing a strong incentive to keep things simple. By contrast, spotting the complexities in a class hierarchy is often not as easy. HTTP doesn't solve every problem. It probably isn't such a good idea to use it inside an inner loop that's executed thousands of times per second. What's more, the HTTP approach might introduce some new problems. We often need to add a thin shim to each application that we wish to communicate over HTTP.
For instance, we might need to write a small plugin in PHP if we want to integrate WordPress into our system. Suddenly, instead of a system written in one language, we’re maintaining a system with several distinct languages and platforms. Even then, we should strive to avoid re-implementing the same old thing. As programmers, we consistently underestimate both the cost of building a system and the ongoing maintenance. If we allow ourselves to integrate existing applications, even if they’re in unfamiliar languages, we save ourselves those development and maintenance costs, as well as being able to pick the best solution for our problem. Thanks to the web, HTTP is often the easiest way to get there.
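    The kind of language-agnostic reuse described above needs nothing more than an HTTP client and a small JSON payload. Here is a minimal, hedged C# sketch of the idea - the endpoint URL and payload shape are hypothetical, not any real provider's API:

        using System;
        using System.Net.Http;
        using System.Text;
        using System.Threading.Tasks;

        class MailServiceClient
        {
            static async Task Main()
            {
                using (var http = new HttpClient())
                {
                    // Hypothetical endpoint and payload - substitute your provider's API.
                    var payload = "{ \"to\": \"customer@example.com\", \"subject\": \"Hello\" }";
                    var content = new StringContent(payload, Encoding.UTF8, "application/json");
                    HttpResponseMessage response =
                        await http.PostAsync("https://mail.example.com/v1/messages", content);
                    Console.WriteLine(response.StatusCode);
                }
            }
        }

    Nothing here cares what language the service itself is written in, which is exactly the point of the article.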

    Read the article

  • How reusable should ViewModel classes be?

    - by stiank81
    I'm working on a WPF application, and I'm structuring it using the MVVM pattern. Initially I had the idea that the ViewModels should be reusable, but now I'm not so sure anymore. Should I be able to reuse my ViewModels if I need similar functionality in a WinForms application? Silverlight doesn't support everything WPF does - should I be able to reuse them in Silverlight applications? What if I want to make a Linux GUI for my application? Then I need the ViewModels to build on Mono - is this something I should strive for? And so on. So: should one write ViewModel classes with one specific View in mind, or with reusability in mind?
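    One common middle ground is to keep the ViewModel free of any WPF-specific types, depending only on the System.ComponentModel abstractions that WPF, Silverlight, and Mono-based bindings all understand. A minimal, hedged sketch of such a toolkit-neutral ViewModel (the class is illustrative, not from the poster's project):

        using System.ComponentModel;

        // No WPF references: only System.ComponentModel, which data-binding
        // stacks on several platforms know how to consume.
        public class CustomerViewModel : INotifyPropertyChanged
        {
            private string _name;

            public string Name
            {
                get { return _name; }
                set
                {
                    if (_name == value) return;
                    _name = value;
                    OnPropertyChanged("Name");
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }

    Whether the reuse is worth it then becomes a question about the Views: the ViewModel itself costs nothing extra to keep portable.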

    Read the article

  • Can't populate cells with an array when I have loaded a second UITableViewController

    - by richard Stephenson
    Hi there, I'm very new to iPhone programming and I'm creating my first app (a World Cup one). The first view is a table view; the cell text labels are filled from an array, so it shows all the groups (Group A, B, C, etc.). When you select a group, it pushes another UITableViewController, but whatever I do I can't set the text labels of its cells (e.g. France, Mexico, South Africa, etc.). In fact, nothing I do in cellForRowAtIndexPath makes a difference. Could someone tell me what I'm doing wrong, please? Thanks. Here is my code for the view controller:

        #import "GroupADetailViewController.h"

        @implementation GroupADetailViewController

        @synthesize groupLabel = _groupLabel;
        @synthesize groupADetail = _groupADetail;
        @synthesize teamsInGroupA;

        #pragma mark Memory management

        - (void)dealloc {
            [_groupADetail release];
            [_groupLabel release];
            [super dealloc];
        }

        #pragma mark View lifecycle

        - (void)viewDidLoad {
            [super viewDidLoad];
            // Set up the data for the table
            teamsInGroupA = [[NSArray alloc] initWithObjects:@"France", @"Mexico", @"Uruguay", @"South Africa", nil];
            NSLog(@"loaded");
            // Set the title
            [[self navigationItem] setTitle:@"Group A"];
        }

        - (void)viewDidUnload {
            [self setGroupLabel:nil];
        }

        #pragma mark Table view methods

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
            // Return the number of sections in the table view
            return 1;
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            // Return the number of rows in a specific section
            // Since we only have one section, just return the number of rows in the table
            NSLog(@"count is %d", [teamsInGroupA count]);
            return 4;
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *cellIdentifier2 = @"Cell2";

            // Reuse an existing cell if one is available for reuse
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellIdentifier2];

            // If no cell was available, create a new one
            if (cell == nil) {
                NSLog(@"no cell, creating");
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                               reuseIdentifier:cellIdentifier2] autorelease];
                [cell setAccessoryType:UITableViewCellAccessoryDisclosureIndicator];
            }

            // Configure the cell to show the data for this row
            [[cell textLabel] setText:[teamsInGroupA objectAtIndex:indexPath.row]];
            return cell;
        }

        @end

    Read the article

  • C, Linux: Receiving data from multiple multicast addresses on the same port - how to distinguish them?

    - by Gigi
    I have an application that is receiving data from multiple multicast sources on the same port. I am able to receive the data. However, I am trying to keep statistics for each group (i.e. messages received, bytes received), and all the data is getting mixed up. Does anyone know how to solve this problem? If I look at the sender's address, it is not the multicast address, but rather the IP of the sending machine. I am using the following socket options:

        struct ip_mreq mreq;
        mreq.imr_multiaddr.s_addr = inet_addr("224.1.2.3");
        mreq.imr_interface.s_addr = INADDR_ANY;
        setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    and also:

        setsockopt(s, SOL_SOCKET, SO_REUSEPORT, &reuse, sizeof(reuse));

    I appreciate any help!
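    The usual fix is to classify packets by their destination address (the group) rather than the source address; on Linux that means enabling the IP_PKTINFO ancillary data and reading it with recvmsg(). As a hedged illustration in managed code, the C# sketch below uses Socket.ReceiveMessageFrom, which surfaces the same per-packet destination information (the port and group addresses are example values):

        using System;
        using System.Net;
        using System.Net.Sockets;

        class MulticastStats
        {
            static void Main()
            {
                var socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
                socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
                socket.Bind(new IPEndPoint(IPAddress.Any, 5000)); // example port

                // Join two example groups on the same port.
                foreach (var group in new[] { "224.1.2.3", "224.1.2.4" })
                    socket.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.AddMembership,
                        new MulticastOption(IPAddress.Parse(group), IPAddress.Any));

                // Ask the stack for per-packet destination info (IP_PKTINFO underneath).
                socket.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.PacketInformation, true);

                var buffer = new byte[65536];
                EndPoint sender = new IPEndPoint(IPAddress.Any, 0);
                SocketFlags flags = SocketFlags.None;

                while (true)
                {
                    int bytes = socket.ReceiveMessageFrom(buffer, 0, buffer.Length,
                        ref flags, ref sender, out IPPacketInformation info);
                    // info.Address is the destination - i.e. the multicast group -
                    // so statistics can be accumulated per group here.
                    Console.WriteLine("{0} bytes for group {1} from {2}", bytes, info.Address, sender);
                }
            }
        }

    An alternative worth considering is opening one socket per group and binding each to its group address, letting the kernel do the demultiplexing for you.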

    Read the article

  • Problems with XAML WPF 4.0 Editor in VS2010

    - by RTPeat
    Wondering if anybody else has found some very odd behaviour with the XAML/WPF 4 editor in VS2010. This only occurs if the project targets .NET 4. Whenever I tried to open a XAML document for editing, the window would appear to open for a split second and then vanish, but VS2010 would still list the window as open. The fault was eventually traced to having the "Reuse current document window, if saved" option (under "Documents" in the "Environment" options) checked. Once this was unchecked, XAML 4 files opened as expected. As I said, this only appears to occur in projects targeting .NET Framework 4 - those targeting 3.5 worked without a problem, and "Reuse current document window, if saved" appears to work fine with other file types.

    Read the article

  • Need some constructive criticism on my SSE/Assembly attempt

    - by Brett
    Hello, I'm working on converting a bit of code to SSE, and while I have the correct output it turns out to be slower than the standard C++ code. The bit of code that I need to do this for is:

        float ox = p2x - (px * c - py * s)*m;
        float oy = p2y - (px * s - py * c)*m;

    What I've got for SSE code is:

        void assemblycalc(vector4 &p, vector4 &sc, float &m, vector4 &xy)
        {
            vector4 r;
            __m128 scale = _mm_set1_ps(m);
            __asm {
                mov eax, p              // Load into CPU reg
                mov ebx, sc
                movups xmm0, [eax]      // move vectors to SSE regs
                movups xmm1, [ebx]

                mulps xmm0, xmm1        // Multiply the elements

                movaps xmm2, xmm0       // make a copy of the array
                shufps xmm2, xmm0, 0x1B // shuffle the array

                subps xmm0, xmm2        // subtract the elements

                mulps xmm0, scale       // multiply the vector by the scale

                mov ecx, xy             // load the variable into CPU reg
                movups xmm3, [ecx]      // move the vector to the SSE regs

                subps xmm3, xmm0        // subtract xmm3 - xmm0

                movups [r], xmm3        // save the return vector, and use elements 0 and 3
            }
        }

    Since it's very difficult to read the code, I'll explain what I did:

        load vector4,    xmm0 p   = [px,   py,   px,   py  ]
        mult by vector4, xmm1 cs  = [c,    c,    s,    s   ]
        --------------------------- mult
        result,          xmm0     = [px*c, py*c, px*s, py*s]

        reuse result,    xmm0     = [px*c, py*c, px*s, py*s]
        shuffle result,  xmm2     = [py*s, px*s, py*c, px*c]
        --------------------------- subtract
        result,          xmm0     = [px*c-py*s, py*c-px*s, px*s-py*c, py*s-px*c]

        reuse result,    xmm0     = [px*c-py*s, py*c-px*s, px*s-py*c, py*s-px*c]
        load m vector4,  scale    = [m, m, m, m]
        --------------------------- mult
        result,          xmm0     = [(px*c-py*s)*m, (py*c-px*s)*m, (px*s-py*c)*m, (py*s-px*c)*m]

        load xy vector4, xmm3     = [p2x, p2x, p2y, p2y]
        reuse,           xmm0     = [(px*c-py*s)*m, (py*c-px*s)*m, (px*s-py*c)*m, (py*s-px*c)*m]
        --------------------------- subtract
        result,          xmm3     = [p2x-(px*c-py*s)*m, p2x-(py*c-px*s)*m, p2y-(px*s-py*c)*m, p2y-(py*s-px*c)*m]

    Then ox = xmm3[0] and oy = xmm3[3], so I essentially don't use xmm3[1] or xmm3[2]. I apologize for the difficulty reading this, but I'm hoping someone might be able to provide some guidance, as the standard C++ code runs in 0.001444 ms and the SSE code runs in 0.00198 ms. Let me know if there is anything I can do to further explain/clean this up a bit. The reason I'm trying to use SSE is that I run this calculation millions of times, and it is part of what is slowing down my current code. Thanks in advance for any help! Brett

    Read the article

  • Receiving multiple multicast feeds on the same port - C, Linux

    - by Gigi
    I have an application that is receiving data from multiple multicast sources on the same port. I am able to receive the data. However, I am trying to keep statistics for each group (i.e. messages received, bytes received), and all the data is getting mixed up. Does anyone know how to solve this problem? If I look at the sender's address, it is not the multicast address, but rather the IP of the sending machine. I am using the following socket options:

        struct ip_mreq mreq;
        mreq.imr_multiaddr.s_addr = inet_addr("224.1.2.3");
        mreq.imr_interface.s_addr = INADDR_ANY;
        setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    and also:

        setsockopt(s, SOL_SOCKET, SO_REUSEPORT, &reuse, sizeof(reuse));

    I appreciate any help!

    Read the article

  • WCF service reference stopped generating code for one project

    - by Mike Pateras
    I have references to two different WCF services in a project. I updated the reference for one of the services, and now no code is generated for it: the references.cs file just has the "this is generated code" comment at the top. Updating that same service in other projects works, and updating the other service works fine too. It's only this one service reference in this one project that's causing the problem, and I'm getting no information from Visual Studio (it just says it failed to generate code and that I should look at the other errors, which provide no information). If I uncheck the "Reuse types in referenced assemblies" option, code is generated, but I don't want this one project to be different from the others; I'd like to solve the problem. Re-checking the reuse-types option produces an empty references.cs file again. The collection type doesn't seem to matter, either. How can I diagnose and solve this problem?

    Read the article

  • Ask StackOverflow: Canny, a lightweight authorization library in Java

    - by eltados
    In the course of my work I need to develop an authorization engine (I'm already authenticated; I check a user's access to an action) in order to keep all the authorization logic in one place and be able to reuse it, and I have created this mini library: http://github.com/eltados/canny (updated). What do you think about it? What are the limits of my approach? Do you see the benefit of it? Is there any lightweight authorization engine library I could have a look at? I had a look at Spring Security and it does not really answer my requirement. The main idea is that I want to be able to reuse the same code to control access in the controllers and the views.

    Read the article

  • Rails 3: How do I call a javascript function from a js.erb file

    - by user321775
    Now that I've upgraded to Rails 3, I'm trying to figure out the proper way to separate and reuse pieces of JavaScript. Here's the scenario I'm dealing with: I have a page with two areas, one with elements that should be draggable, the other with droppables. When the page loads, I use jQuery to set up the draggables and droppables. Currently I have the script in the head portion of application.html.erb, which I'm sure is not the right solution, but at least it works. When I press a button on the page, an Ajax call is made to my controller that replaces the draggables with a new set of elements that should also be draggable. I have a js.erb file that renders a partial in the correct location. After rendering, I need to make the new elements draggable, so I'd like to reuse the code that currently lives in application.html.erb, but I haven't found the right way to do it. I can only make the new elements draggable by pasting the code directly into my js.erb file (yuck). What I'd like to have: a JavaScript file that contains the functions prepdraggables() and prepdroppables(), and a way to call either function from application.html.erb or from a js.erb file. I've tried using content_for to store and reuse the code, but can't seem to get it working correctly. What I currently have in the head section of application.html.erb:

        <% content_for :drag_drop_prep do %>
        <script type="text/javascript" charset="utf-8">
          $(document).ready(function () {
            // declare all DOM elements with class draggable to be draggable
            $( ".draggable" ).draggable( { revert : 'invalid' });

            // declare all DOM elements with class legal to be droppable
            $(".legal").droppable({
              hoverClass : 'legal_hover',
              drop : function(event, ui) {
                var c = new Object();
                c['die'] = ui.draggable.attr("id");
                c['cell'] = $(this).attr("id");
                c['authenticity_token'] = encodeURIComponent(window._token);
                $.ajax({
                  type: "POST",
                  url: "/placeDie",
                  data: c,
                  timeout: 5000
                });
              }
            });
          });
        </script>
        <% end %>

    undo.js.erb:

        $("#board").html("<%= escape_javascript(render :partial => 'shared/board', :locals => { :playable => true, :restartable => !session[:challenge]}) %>")

        // This is where I want to prepare draggables.
        <%= javascript_include_tag "customdragdrop.js" %>
        // assuming this file had the draggables code from above in a prepdraggables() function
        prepdraggables();

    Read the article

  • bash: How to evaluate PS1, PS2, ...?

    - by Harry
    Is there any way to 'evaluate' PS1, PS2, etc. from within a bash script? Although I can use alternate means to get all the elements of my current PS1, I would really like to be able to reuse its definition instead of using those alternate means. For example:

        =====================================
        PS1 element  -->  Alternate means
        =====================================
        \u           -->  $USER
        \h           -->  $HOSTNAME
        \w           -->  $PWD
        ...
        =====================================

    I could very well use the 'alternate means' column in my script, but I don't want to. In my PS1 I, for example, use bold blue color via terminal escape sequences, which I'd like to be able to reuse simply by evaluating PS1.

    Read the article

  • Writing Great Software

    - by 01010011
    Hi, I'm currently reading Head First Object-Oriented Analysis and Design. The book states that to write great software (i.e. software that is well-designed, well-coded, and easy to maintain, reuse, and extend) you need to do three things: first, make sure the software does everything the customer wants it to do; once step 1 is completed, apply object-oriented principles and techniques to eliminate any duplicate code that might have slipped in; and once steps 1 and 2 are complete, apply design patterns to make sure the software is maintainable and reusable for years to come. My question is: do you follow these steps when developing great software? If not, what steps do you usually follow in order to ensure it's well designed, well coded, and easy to maintain, reuse, and extend?

    Read the article

  • In a web project, can we write a core services layer without knowledge of the UI?

    - by Silent Warrior
    I am working on a web project. We are using Flex as the UI layer. My question: we often write the core service layer separately from the web/UI layer so we can reuse the same services with a different UI layer or technology. Is it practically possible to reuse the same core-layer services, without any changes or additions to the API, across different kinds of UI technologies and layers - for example, the same core service layer with a UI technology based on synchronous request/response (JSP, etc.) and with a non-synchronous, event-driven UI technology (Ajax, Flex, GWT, etc.), or across multiple devices (computers, mobiles, PDAs, etc.)? Personally, I feel it's very tough to write a core service layer without any knowledge of the UI. Looking for thoughts from other people.
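    One way to keep such a layer honest is to expose it only through UI-agnostic interfaces and let each front end adapt the calling style. A hedged C# sketch of the shape this can take (the service and types are illustrative, not from the poster's project):

        using System;
        using System.Collections.Generic;

        // UI-agnostic contract: no web, Flex, or device-specific types leak in.
        public interface IOrderService
        {
            // Request/response front ends (e.g. JSP-style pages) call this per request.
            IList<string> GetOpenOrders(string customerId);

            // Event-driven front ends (Ajax, Flex, GWT) can subscribe for pushes;
            // request/response front ends simply ignore the event and poll.
            event EventHandler<OrderEventArgs> OrderChanged;
        }

        public class OrderEventArgs : EventArgs
        {
            public string OrderId { get; set; }
        }

    The synchronous and event-driven styles then become adapters around the same core, rather than changes to its API.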

    Read the article

  • How To Use Regular Expressions for Data Validation and Cleanup

    You need to provide data validation at the server level for complex strings like phone numbers, email addresses, etc. You may also need to do data cleanup/standardization before moving it from source to target. Although SQL Server provides a fair number of string functions, code developed with these built-in functions can become complex and hard to maintain or reuse.
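    For the phone-number case, the regular expression itself is the reusable piece. Below is a small hedged C# example of the kind of pattern the article is about - this particular pattern is one common North American format, not the article's exact expression, and the same method could back a SQLCLR function:

        using System;
        using System.Text.RegularExpressions;

        class PhoneValidation
        {
            // Matches e.g. (555) 123-4567, 555-123-4567, 5551234567.
            static readonly Regex Phone =
                new Regex(@"^\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}$", RegexOptions.Compiled);

            static bool IsValidPhone(string input)
            {
                return input != null && Phone.IsMatch(input.Trim());
            }

            static void Main()
            {
                Console.WriteLine(IsValidPhone("(555) 123-4567")); // True
                Console.WriteLine(IsValidPhone("12345"));          // False
            }
        }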

    Read the article

  • Utilizing Generics to make a Class structure more mutable…

    - by Keith Barrows
    While the ASP.NET GridView control supports automatic paging, I have found it faster to use custom paging in several situations. I found myself rewriting the same code over and over just to add basic sorting capabilities to an ASP.NET GridView object. So today I took a little bit of time to encapsulate it all into a class I can use and reuse on any page with a GridView. In fact, it will probably take longer to write this blog entry than it took to encapsulate the functionality... (read more)
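    The excerpt doesn't include the class itself, but a reusable, grid-agnostic sort helper of the kind described often looks roughly like this hedged sketch (entirely illustrative, not the author's code; the sorted property values must be comparable):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Reflection;

        // Reusable, GridView-agnostic sorting: give it any list and the sort
        // expression the grid raised, and it orders rows by that property.
        public static class GridSortHelper
        {
            public static List<T> Sort<T>(IEnumerable<T> rows, string sortExpression, bool ascending)
            {
                PropertyInfo prop = typeof(T).GetProperty(sortExpression);
                if (prop == null)
                    throw new ArgumentException("Unknown sort column: " + sortExpression);

                return ascending
                    ? rows.OrderBy(r => prop.GetValue(r, null)).ToList()
                    : rows.OrderByDescending(r => prop.GetValue(r, null)).ToList();
            }
        }

    A page's Sorting event handler can then rebind with GridSortHelper.Sort(rows, e.SortExpression, ascending), whatever the row type.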

    Read the article

  • C# Performance Pitfall – Interop Scenarios Change the Rules

    - by Reed
    C# and .NET, overall, really do have fantastic performance in my opinion. That being said, the performance characteristics differ dramatically from native programming, and take some relearning if you're used to doing performance optimization in most other languages, especially C, C++, and similar. However, there are times when revisiting tricks learned in native code plays a critical role in performance optimization in C#. I recently ran across a nasty scenario that illustrated to me how dangerous following any fixed rules for optimization can be... The rules in C# when optimizing code are very different than in C or C++. Often, they're exactly backwards. For example, in C and C++, lifting a variable out of a loop in order to avoid memory allocations can have huge advantages. If some function within a call graph is allocating memory dynamically, and that gets called in a loop, it can dramatically slow down a routine. This can be a tricky bottleneck to track down, even with a profiler. Looking at the memory allocation graph is usually the key for spotting this routine, as it's often "hidden" deep in the call graph. For example, while optimizing some of my scientific routines, I ran into a situation where I had a loop similar to:

        for (i=0; i<numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i]);
        }

    This loop was at a fairly high level in the call graph, and often could take many hours to complete, depending on the input data. As such, any performance optimization we could achieve would be greatly appreciated by our users. After a fair bit of profiling, I noticed that a couple of function calls down the call graph (inside of ProcessElement), there was some code that effectively was doing:

        // Allocate some data required
        DataStructure* data = new DataStructure(num);

        // Call into a subroutine that passed around and manipulated this data highly
        CallSubroutine(data);

        // Read and use some values from here
        double values = data->Foo;

        // Cleanup
        delete data;

        // ...
        return bar;

    Normally, if "DataStructure" was a simple data type, I could just allocate it on the stack. However, its constructor internally allocated its own memory using new, so this wouldn't eliminate the problem. In this case, however, I could change the call signatures to allow the pointer to the data structure to be passed into ProcessElement and through the call graph, allowing the inner routine to reuse the same "data" memory instead of allocating. At the highest level, my code effectively changed to something like:

        DataStructure* data = new DataStructure(numberToProcess);
        for (i=0; i<numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i], data);
        }
        delete data;

    Granted, this dramatically reduced the maintainability of the code, so it wasn't something I wanted to do unless there was a significant benefit. In this case, after profiling the new version, I found that it increased the overall performance dramatically - my main test case went from 35 minutes runtime down to 21 minutes. This was such a significant improvement that I felt it was worth the reduction in maintainability. In C and C++, it's generally a good idea (for performance) to: reduce the number of memory allocations as much as possible; use fewer, larger memory allocations instead of many smaller ones; and allocate as high up the call stack as possible, and reuse memory. I've seen many people try to make similar optimizations in C# code. For good or bad, this is typically not a good idea. The garbage collector in .NET completely changes the rules here. In C#, reallocating memory in a loop is not always a bad idea. In this scenario, for example, I may have been much better off leaving the original code alone. The reason for this is the garbage collector. The GC in .NET is incredibly effective, and leaving the allocation deep inside the call stack has some huge advantages. First and foremost, it tends to make the code more maintainable - passing around object references tends to couple the methods together more than necessary, and overall increases the complexity of the code. This is something that should be avoided unless there is a significant reason. Second, (unlike C and C++) memory allocation of a single object in C# is normally cheap and fast. Finally, and most critically, there is a large advantage to having short-lived objects. If you lift a variable out of the loop and reuse the memory, it's much more likely that the object will get promoted to Gen1 (or worse, Gen2). This can cause expensive compaction operations to be required, and also lead to (at least temporary) memory fragmentation as well as more costly collections later. As such, I've found that it's often (though not always) faster to leave memory allocations where you'd naturally place them - deep inside of the call graph, inside of the loops. This causes the objects to stay very short-lived, which in turn increases the efficiency of the garbage collector, and can dramatically improve the overall performance of the routine as a whole. In C#, I tend to: keep variable declarations in the tightest scope possible, and declare and allocate objects at usage. While this serves some of the same goals (reducing unnecessary allocations, etc.), the goal here is a bit different - it's about keeping the objects rooted for as little time as possible in order to (attempt to) keep them completely in Gen0, or worst case, Gen1. It also has the huge advantage of keeping the code very maintainable - objects are used and "released" as soon as possible, which keeps the code very clean. It does, however, often have the side effect of causing more allocations to occur, while keeping the objects rooted for a much shorter time. Now - nowhere here am I suggesting that these rules are hard, fast rules that are always true. That being said, my time spent optimizing over the years encourages me to naturally write code that follows the above guidelines, then profile and adjust as necessary. In my current project, however, I ran across one of those nasty little pitfalls that's something to keep in mind - interop changes the rules. In this case, I was dealing with an API that, internally, used some COM objects, and those COM objects were leading to native allocations (most likely C++) occurring in a loop deep in my call graph. Even though I was writing nice, clean managed code, the normal managed-code rules for performance no longer applied. After profiling to find the bottleneck in my code, I realized that my inner loop, an innocuous-looking block of C# code, was effectively causing a set of native memory allocations in every iteration. This required going back to a "native programming" mindset for optimization. Lifting these variables out and reusing them took a 1:10 routine down to 0:20 - again, a very worthwhile improvement. Overall, the lessons here are: always profile if you suspect a performance problem - don't assume any rule is correct, or any code is efficient just because it looks like it should be; remember to check memory allocations when profiling, not just CPU cycles; interop scenarios often cause managed code to act very differently than "normal" managed code; and native code can be hidden very cleverly inside of managed wrappers.
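    As a minimal illustration of the managed-side style argued for above (DataChunk and Process are hypothetical stand-ins, not the article's types):

        using System;

        class AllocationDemo
        {
            // Hypothetical work item; stands in for the article's DataStructure.
            class DataChunk
            {
                public byte[] Payload;
                public DataChunk(int size) { Payload = new byte[size]; }
            }

            static void Process(DataChunk chunk)
            {
                // ... work with chunk.Payload ...
            }

            static void Main()
            {
                const int iterations = 1000;
                for (int i = 0; i < iterations; i++)
                {
                    // Allocated at usage, dead by the end of the iteration:
                    // the object stays in Gen0, where collection is cheapest.
                    var chunk = new DataChunk(4096);
                    Process(chunk);
                }
                Console.WriteLine("done");
            }
        }

    Each chunk dies within its iteration, so the GC reclaims it from Gen0; lifting it above the loop keeps one object rooted for the whole run, which is exactly the native-style pattern the article warns against in pure managed code.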

    Read the article

  • How much freedom should a programmer have in choosing a language and framework?

    - by Spencer
    I started working at a company that is primarily C#-oriented. We have a few people who like Java and JRuby, but the majority of programmers here like C#. I was hired because I have a lot of experience building web applications and because I lean towards newer technologies like JRuby on Rails or Node.js. I have recently started on a project building a web application, with a focus on getting a lot of stuff done in a short amount of time. The software lead has dictated that I use MVC4 instead of Rails. That might be OK, except I don't know MVC4, I don't know C#, and I am the only one responsible for creating the web application server and front-end UI. Wouldn't it make sense to use a framework that I already know extremely well (Rails) instead of MVC4? The two reasons behind the decision were that the tech lead doesn't know JRuby/Rails and that there would be no way to reuse the code. Counter-arguments: He won't be contributing to the code and is, frankly, not needed on this project, so it doesn't really matter whether he knows JRuby/Rails or not. And we actually can reuse the code, since we have a lot of Java apps that JRuby can pull code from and vice versa. In fact, he has dedicated some resources to converting a Java library to C# instead of just running the Java library on the JRuby on Rails app, all because he doesn't like Java or JRuby. I have built many web applications, but using something unfamiliar is causing some spin-up, and I am unable to build an awesome application in as short a time as I'm used to. That would be fine - learning new technologies is important in this field - but for this project we need to get a lot done in a short period of time. At what point should a developer be allowed to choose his tools? Is this dependent on the company? Does my company suck, or is this considered normal? Do greener pastures exist? Am I looking at this the wrong way? Bonus: should I just keep my head down and move along at a snail's pace, or defy orders and go with what I know in order to make this project more successful? Edit: I had actually created a fully functional Rails application (on my own time) and showed it to the team, and it did not seem to matter. I am currently porting it to MVC4 (slowly).

    Read the article
