Search Results

Search found 21702 results on 869 pages for 'large objects'.

Page 265/869 | < Previous Page | 261 262 263 264 265 266 267 268 269 270 271 272  | Next Page >

  • Help for choosing a cost effective game server for Flash client

    - by Sapots Thomas
    I am developing a Flash-based game, primarily for desktops, to be hosted on the Facebook platform (like CityVille, The Sims Social, etc.). The gameplay doesn't involve real-time communication between players, unlike an MMORPG; each player plays in his own world without any knowledge of other online players. I've written almost 95% of the game logic in ActionScript on the client side. I used SmartFoxServer Pro on the server side (mostly for getting data from the DB), and the entire server code is an extension written in Java. I'm using JSON as the protocol for communication.

    Although I love SmartFoxServer, as an indie it's tough for me to afford the unlimited-users license. Moreover, it's limited to just one machine. So I'm now looking for an alternative to SmartFoxServer.

    The reason for choosing SmartFoxServer earlier was its support for server properties. Server properties take advantage of the socket connection and are essentially server-side Java objects that store data the player changes frequently during the game. When he logs out of the game, the extension can write the final state to the DB (I'm using MySQL). This significantly reduces the number of DB UPDATE/INSERT calls made during the game. I love the way this works, since the data is secure on the server side, and SmartFoxServer is known to be scalable. (Although I'm not sure whether this approach is widely used in the gaming industry; since this is not an MMORPG, I'm putting all players in the lobby.)

    So my question is whether any of the free, community-supported servers like RedDwarf, Firebase, or BlazeDS can provide a similar architecture, so that I can use server properties without many code changes?

    EDIT: I am not insisting on the exact same feature (that's asking too much!), but at least a viable messaging system on the server, so that I can send ActionScript objects from the client using JSON/binary so that it's fast. Or maybe some completely different way to implement what I need here. Thanks in advance.

    Read the article

  • SSAS DMVs: useful links

    - by Davide Mauri
    From time to time it happens that I need to extract metadata information from the Analysis Services DMVs in order to quickly get an overview of the entire situation and/or drill down to the detail level. As a memo, I'm posting the links I use most when I need documentation on SSAS objects and data DMVs:

    SSAS: Using DMV Queries to get Cube Metadata
    http://bennyaustin.wordpress.com/2011/03/01/ssas-dmv-queries-cube-metadata/

    SSAS DMV (Dynamic Management View)
    http://dwbi1.wordpress.com/2010/01/01/ssas-dmv-dynamic-management-view/

    Use Dynamic Management Views (DMVs) to Monitor Analysis Services
    http://msdn.microsoft.com/en-us/library/hh230820.aspx
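    (As a quick reminder of what such queries look like in practice, here are two typical DMV queries, run from an MDX query window in Management Studio against an SSAS instance; the CUBE_SOURCE filter is quoted from memory, so double-check it against the documentation above:)

        -- List the cubes in the connected SSAS database
        SELECT CUBE_NAME, LAST_SCHEMA_UPDATE
        FROM $SYSTEM.MDSCHEMA_CUBES
        WHERE CUBE_SOURCE = 1   -- 1 = cubes, 2 = dimensions

        -- See who is currently connected, and what they last ran
        SELECT SESSION_USER_NAME, SESSION_LAST_COMMAND
        FROM $SYSTEM.DISCOVER_SESSIONS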

    Read the article

  • Local SEO Tools For SME's

    For many large corporations the focus of their SEO strategy will be on a national scale. But often for small and medium-sized enterprises a more local view should be taken to maximise visibility with their target audience, and ensure they are reaching their potential client base on a regular basis. Alongside the traditional search engine optimisation tactics there are a number of tools that can be incorporated within local strategies.

    Read the article

  • What strategy to use when starting in a new project with no documentation?

    - by Amir Rezaei
    What is the best way to go when there is no documentation? For example, how do you learn the business rules? I have done the following steps:

    - Since we are using an ORM tool, I have printed a copy of the database schema, where I can see the relations between objects.
    - I have made a list of short names/table names that I will get explained.

    The project is a client/server enterprise application using the MVVM pattern.

    Read the article

  • Overwrite previous output in Bash instead of appending it

    - by NES
    For a bash timer I use this code:

        #!/bin/bash
        sek=60
        echo "60 Seconds Wait!"
        echo -n "One Moment please "
        while [ $sek -ge 1 ]
        do
            echo -n "$sek "
            sleep 1
            sek=$[$sek-1]
        done
        echo
        echo "ready!"

    That gives me output like this:

        One Moment please: 60 59 58 57 56 55 ...

    Is there a way to replace the previously printed number with the current one, so that the output doesn't leave a long trail but instead counts down in place, like a real clock? (Hope you understand what I mean :))
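    (One minimal way to get this behavior is to print a carriage return before each update, so the cursor jumps back to the start of the line; this is a sketch along the lines of the original script, not the only approach:)

        #!/bin/bash
        sek=60
        echo "60 Seconds Wait!"
        while [ $sek -ge 1 ]
        do
            # \r moves the cursor back to column 1; the trailing spaces
            # overwrite leftover digits when the number gets shorter
            printf "\rOne Moment please %2d " "$sek"
            sleep 1
            sek=$((sek-1))
        done
        printf "\nready!\n"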

    Read the article

  • Given an XML file which contains a representation of a graph, how can I apply the DFS algorithm to it? [on hold]

    - by winston smith
    Given the following XML, which describes a directed graph:

        <?xml version="1.0" encoding="iso-8859-1" ?>
        <!DOCTYPE graph PUBLIC "-//FC//DTD red//EN" "../dtd/graph.dtd">
        <graph direct="1">
            <vertex label="V0"/>
            <vertex label="V1"/>
            <vertex label="V2"/>
            <vertex label="V3"/>
            <vertex label="V4"/>
            <vertex label="V5"/>
            <edge source="V0" target="V1" weight="1"/>
            <edge source="V0" target="V4" weight="1"/>
            <edge source="V5" target="V2" weight="1"/>
            <edge source="V5" target="V4" weight="1"/>
            <edge source="V1" target="V2" weight="1"/>
            <edge source="V1" target="V3" weight="1"/>
            <edge source="V1" target="V4" weight="1"/>
            <edge source="V2" target="V3" weight="1"/>
        </graph>

    With these classes I parsed the graph and gave it an adjacency-list representation:

        import java.io.IOException;
        import java.util.HashSet;
        import java.util.LinkedList;
        import java.util.Collection;
        import java.util.Iterator;
        import java.util.logging.Level;
        import java.util.logging.Logger;
        import practica3.util.Disc;

        public class ParsingXML {

            public static void main(String[] args) {
                try {
                    // TODO code application logic here
                    Collection<Vertex> sources = new HashSet<Vertex>();
                    LinkedList<String> lines = Disc.readFile("xml/directed.xml");
                    for (String lin : lines) {
                        int i = Disc.find(lin, "source=\"");
                        String data = "";
                        if (i > 0 && i < lin.length()) {
                            while (lin.charAt(i + 1) != '"') {
                                data += lin.charAt(i + 1);
                                i++;
                            }
                            Vertex v = new Vertex();
                            v.setName(data);
                            v.setAdy(new HashSet<Vertex>());
                            sources.add(v);
                        }
                    }
                    Iterator it = sources.iterator();
                    while (it.hasNext()) {
                        Vertex ver = (Vertex) it.next();
                        Collection<Vertex> adyacencias = ver.getAdy();
                        LinkedList<String> ls = Disc.readFile("xml/graphs.xml");
                        for (String lin : ls) {
                            int i = Disc.find(lin, "target=\"");
                            String data = "";
                            if (lin.contains("source=\"" + ver.getName())) {
                                Vertex v = new Vertex();
                                if (i > 0 && i < lin.length()) {
                                    while (lin.charAt(i + 1) != '"') {
                                        data += lin.charAt(i + 1);
                                        i++;
                                    }
                                    v.setName(data);
                                }
                                i = Disc.find(lin, "weight=\"");
                                data = "";
                                if (i > 0 && i < lin.length()) {
                                    while (lin.charAt(i + 1) != '"') {
                                        data += lin.charAt(i + 1);
                                        i++;
                                    }
                                    v.setWeight(Integer.parseInt(data));
                                }
                                if (v.getName() != null) {
                                    adyacencias.add(v);
                                }
                            }
                        }
                    }
                    for (Vertex vert : sources) {
                        System.out.println(vert);
                        System.out.println("adyacencias: " + vert.getAdy());
                    }
                } catch (IOException ex) {
                    Logger.getLogger(ParsingXML.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }

    This is another class:

        import java.util.Collection;
        import java.util.Objects;

        public class Vertex {

            private String name;
            private int weight;
            private Collection ady;

            public Collection getAdy() {
                return ady;
            }

            public void setAdy(Collection adyacencias) {
                this.ady = adyacencias;
            }

            public String getName() {
                return name;
            }

            public void setName(String nombre) {
                this.name = nombre;
            }

            public int getWeight() {
                return weight;
            }

            public void setWeight(int weight) {
                this.weight = weight;
            }

            @Override
            public int hashCode() {
                int hash = 7;
                hash = 43 * hash + Objects.hashCode(this.name);
                hash = 43 * hash + this.weight;
                return hash;
            }

            @Override
            public boolean equals(Object obj) {
                if (obj == null) {
                    return false;
                }
                if (getClass() != obj.getClass()) {
                    return false;
                }
                final Vertex other = (Vertex) obj;
                if (!Objects.equals(this.name, other.name)) {
                    return false;
                }
                if (this.weight != other.weight) {
                    return false;
                }
                return true;
            }

            @Override
            public String toString() {
                return "Vertice{" + "name=" + name + ", weight=" + weight + '}';
            }
        }

    And finally (comments translated from Spanish):

        /**
         * @author user
         */
        /* -*-jde-*- */
        /* <Disc.java> Contains the main argument */
        import java.io.*;
        import java.util.LinkedList;

        /**
         * Reading and writing of files as lists of strings.
         * Ideal for use with the graph classes.
         *
         * @author Peralta Santa Anna Victor Miguel
         * @since July 2011
         */
        public class Disc {

            /**
             * Method for reading a file.
             *
             * @param fileName the file to be read
             * @return the file represented as a list of strings
             */
            public static LinkedList<String> readFile(String fileName) throws IOException {
                BufferedReader file = new BufferedReader(new FileReader(fileName));
                LinkedList<String> textlist = new LinkedList<String>();
                while (file.ready()) {
                    textlist.add(file.readLine().trim());
                }
                file.close();
                /*
                for (String linea : textlist) {
                    if (linea.contains("source")) {
                        //String generado = linea.replaceAll("<\\w+\\s+\"", "");
                        //System.out.println(generado);
                    }
                }
                */
                return textlist;
            }//readFile

            public static int find(String linea, String palabra) {
                int i, j;
                boolean found = false;
                for (i = 0, j = 0; i < linea.length(); i++) {
                    if (linea.charAt(i) == palabra.charAt(j)) {
                        j++;
                        if (j == palabra.length()) {
                            found = true;
                            return i;
                        }
                    } else {
                        continue;
                    }
                }
                if (!found) {
                    i = -1;
                }
                return i;
            }

            /**
             * Method for writing a file.
             *
             * @param fileName the file to be written
             * @param tofile the list of strings that will end up in the file
             * @param append whether to append the content or start from scratch
             */
            public static void writeFile(String fileName, LinkedList<String> tofile, boolean append) throws IOException {
                FileWriter file = new FileWriter(fileName, append);
                for (int i = 0; i < tofile.size(); i++) {
                    file.write(tofile.get(i) + "\n");
                }
                file.close();
            }//writeFile

            /**
             * Method for writing a file.
             *
             * @param msg the file to be written
             * @param tofile the string that will end up in the file
             * @param append whether to append the content or start from scratch
             */
            public static void writeFile(String msg, String tofile, boolean append) throws IOException {
                FileWriter file = new FileWriter(msg, append);
                file.write(tofile);
                file.close();
            }//writeFile
        }// Disc

    I'm stuck on what the best way is, given this adjacency-list representation of the graph, to apply the depth-first search algorithm to it. Any idea of how to approach the task?
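    (Since the question is how to run DFS over this representation, here is a minimal sketch; the class and method names are mine, not from the original code. The visited set tracks vertex names rather than Vertex objects, because Vertex.equals() also compares weight, and adjacency entries are resolved by name because the parser creates separate stub instances for edge targets:)

        import java.util.Collection;
        import java.util.HashMap;
        import java.util.HashSet;
        import java.util.Map;
        import java.util.Set;

        public class DepthFirst {

            // Index the parsed source vertices by name so adjacency stubs can be resolved.
            public static Map<String, Vertex> indexByName(Collection<Vertex> sources) {
                Map<String, Vertex> byName = new HashMap<String, Vertex>();
                for (Vertex v : sources) {
                    byName.put(v.getName(), v);
                }
                return byName;
            }

            public static void dfs(Vertex start, Map<String, Vertex> byName, Set<String> visited) {
                if (start == null || !visited.add(start.getName())) {
                    return; // nothing to visit, or already visited
                }
                System.out.println("visiting " + start.getName());
                Collection ady = start.getAdy();
                if (ady == null) {
                    return; // sinks such as V3 never appear as a source, so they carry no adjacency set
                }
                for (Object o : ady) {
                    Vertex stub = (Vertex) o;
                    // prefer the indexed vertex (which carries its own adjacency set) over the stub
                    Vertex next = byName.get(stub.getName());
                    dfs(next != null ? next : stub, byName, visited);
                }
            }
        }

    Usage, after the parsing step has filled the sources collection:

        Map<String, Vertex> byName = DepthFirst.indexByName(sources);
        DepthFirst.dfs(byName.get("V0"), byName, new HashSet<String>());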

    Read the article

  • Custom Lookup Provider For NetBeans Platform CRUD Tutorial

    - by Geertjan
    For a long time I've been planning to rewrite the second part of the NetBeans Platform CRUD Application Tutorial to integrate the loosely coupled capabilities introduced in a separate series of articles based on articles by Antonio Vieiro (a great series, by the way). Nothing like getting into the Lookup stuff right from the get-go (rather than as an afterthought)! The question, of course, is how to integrate the loosely coupled capabilities in a logical way within that tutorial.

    Today I worked through the tutorial from scratch, up until the point where the prototype is completed, i.e., there's a JTextArea displaying data pulled from a database. That brought me to the place where I needed to be. In fact, as soon as the prototype is completed, i.e., the database connection has been shown to work, the whole story about Lookup.Provider and InstanceContent should be introduced, so that all the subsequent sections, i.e., everything within "Integrating CRUD Functionality", will be done by adding new capabilities to the Lookup.Provider.

    However, before I perform open-heart surgery on that tutorial, I'd like to run the scenario by all those reading this blog who understand what I'm trying to do! (I.e., probably anyone who has read this far into this blog entry.) So, this is what I propose should happen, in this order:

    Point out that right now the database access code is found directly within our TopComponent. Not good, because you're mixing view code with data code and, ideally, the developers creating the user interface wouldn't need to know anything about the data access layer. Better to separate out the data access code into a separate class, within the CustomerLibrary module, i.e., far away from the module providing the user interface, with this content:

        public class CustomerDataAccess {
            public List<Customer> getAllCustomers() {
                return Persistence.createEntityManagerFactory("CustomerLibraryPU").
                        createEntityManager().createNamedQuery("Customer.findAll").getResultList();
            }
        }

    Point out that there is a concept of "Lookup" (which readers of the tutorial should know about, since they should have followed the NetBeans Platform Quick Start), which is a registry into which objects can be published and to which other objects can listen. In the same way as a TopComponent provides a Lookup, as demonstrated in the NetBeans Platform Quick Start, your own object can also provide a Lookup. So, therefore, let's provide a Lookup for Customer objects:

        import org.openide.util.Lookup;
        import org.openide.util.lookup.AbstractLookup;
        import org.openide.util.lookup.InstanceContent;

        public class CustomerLookupProvider implements Lookup.Provider {

            private Lookup lookup;
            private InstanceContent instanceContent;

            public CustomerLookupProvider() {
                // Create an InstanceContent to hold capabilities...
                instanceContent = new InstanceContent();
                // Create an AbstractLookup to expose the InstanceContent...
                lookup = new AbstractLookup(instanceContent);
                // Add a "Read" capability to the Lookup of the provider:
                //...to come...
                // Add an "Update" capability to the Lookup of the provider:
                //...to come...
                // Add a "Create" capability to the Lookup of the provider:
                //...to come...
                // Add a "Delete" capability to the Lookup of the provider:
                //...to come...
            }

            @Override
            public Lookup getLookup() {
                return lookup;
            }
        }

    Point out that, in the same way as we can publish an object into the Lookup of a TopComponent, we can now also publish an object into the Lookup of our CustomerLookupProvider. Instead of publishing a String, as in the NetBeans Platform Quick Start, we'll publish an instance of our own type. And here is the type:

        public interface ReadCapability {
            public void read() throws Exception;
        }

    And here is an implementation of our type, added to our Lookup:

        import java.util.HashSet;
        import java.util.Set;
        import org.netbeans.api.progress.ProgressHandle;
        import org.netbeans.api.progress.ProgressHandleFactory;
        import org.openide.util.Lookup;
        import org.openide.util.lookup.AbstractLookup;
        import org.openide.util.lookup.InstanceContent;

        public class CustomerLookupProvider implements Lookup.Provider {

            private Set<Customer> customerSet;
            private Lookup lookup;
            private InstanceContent instanceContent;

            public CustomerLookupProvider() {
                customerSet = new HashSet<Customer>();
                // Create an InstanceContent to hold capabilities...
                instanceContent = new InstanceContent();
                // Create an AbstractLookup to expose the InstanceContent...
                lookup = new AbstractLookup(instanceContent);
                // Add a "Read" capability to the Lookup of the provider:
                instanceContent.add(new ReadCapability() {
                    @Override
                    public void read() throws Exception {
                        ProgressHandle handle = ProgressHandleFactory.createHandle("Loading...");
                        handle.start();
                        customerSet.addAll(new CustomerDataAccess().getAllCustomers());
                        handle.finish();
                    }
                });
                // Add an "Update" capability to the Lookup of the provider:
                //...to come...
                // Add a "Create" capability to the Lookup of the provider:
                //...to come...
                // Add a "Delete" capability to the Lookup of the provider:
                //...to come...
            }

            @Override
            public Lookup getLookup() {
                return lookup;
            }

            public Set<Customer> getCustomers() {
                return customerSet;
            }
        }

    Point out that we can now create a new instance of our Lookup (in some other module, so long as it has a dependency on the module providing the CustomerLookupProvider and the ReadCapability), retrieve the ReadCapability, and then do something with the customers that are returned (here in the rewritten constructor of the TopComponent), without needing to know anything about how the database access is actually achieved, since that is hidden in the implementation of our type above:

        public CustomerViewerTopComponent() {
            initComponents();
            setName(Bundle.CTL_CustomerViewerTopComponent());
            setToolTipText(Bundle.HINT_CustomerViewerTopComponent());
            // EntityManager entityManager = Persistence.createEntityManagerFactory("CustomerLibraryPU").createEntityManager();
            // Query query = entityManager.createNamedQuery("Customer.findAll");
            // List<Customer> resultList = query.getResultList();
            // for (Customer c : resultList) {
            //     jTextArea1.append(c.getName() + " (" + c.getCity() + ")" + "\n");
            // }
            CustomerLookupProvider lookup = new CustomerLookupProvider();
            ReadCapability rc = lookup.getLookup().lookup(ReadCapability.class);
            try {
                rc.read();
                for (Customer c : lookup.getCustomers()) {
                    jTextArea1.append(c.getName() + " (" + c.getCity() + ")" + "\n");
                }
            } catch (Exception ex) {
                Exceptions.printStackTrace(ex);
            }
        }

    Does the above make as much sense to others as it does to me, including the naming of the classes? Feedback would be appreciated! Then I'll integrate it into the tutorial and do the same for the other sections, i.e., "Create", "Update", and "Delete". (By the way, the tutorial ends up showing that, rather than using a JTextArea to display data, you can use Nodes and explorer views to do so.)
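    (Purely as a reader's guess at the pattern, the "to come" capabilities would presumably mirror the "Read" one. For example, an "Update" capability might look like the sketch below; note that saveCustomer() is a hypothetical helper, not a method of the CustomerDataAccess class shown above:)

        public interface UpdateCapability {
            public void update() throws Exception;
        }

        // ...and inside the CustomerLookupProvider constructor:
        instanceContent.add(new UpdateCapability() {
            @Override
            public void update() throws Exception {
                for (Customer c : customerSet) {
                    // saveCustomer() is hypothetical; it would merge the
                    // changed Customer back into the database
                    new CustomerDataAccess().saveCustomer(c);
                }
            }
        });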

    Read the article

  • How do your busiest people transfer their knowledge?

    - by Wikis Commit At Area 51
    We have recently polled our company-wide wiki users and found that there are two large groups of users:

    - people with lots of knowledge but (who claim they have) no time to document
    - people with time but (who claim they have) not enough knowledge worth documenting

    Each group covered almost 50% of the users! How do your companies handle this? That is, how do you encourage your busiest / most knowledgeable people to share their knowledge?

    Read the article

  • ANTS Memory Profiler 7.0

    - by James Michael Hare
    I had always been a fan of ANTS products (Reflector is absolutely invaluable, and their performance profiler is great as well, very easy to use!), so I was curious to see what the ANTS Memory Profiler could show me.

    Background

    While a performance profiler tracks how much time is typically spent in each unit of code, a memory profiler gives you much more detail on how and where your memory is being consumed and released in a program. As an example, I'd been working on a data access layer at work to call a market data web service. This web service would take a list of symbols to quote and would return the quote data. To help consolidate the thousands of web requests per second we get and reduce load on the web services, we implemented a 5-second cache of quote data. Not quite long enough for customers to typically notice a quote go "stale", but just long enough to be able to collapse multiple quote requests for the same symbol in a short period of time.

    A 5-second cache may not sound like much, but it actually pays off by saving us roughly 42% of our web service calls, while still providing relatively up-to-date information. The question is whether or not the extra memory involved in maintaining the cache is worth it, so I decided to fire up the ANTS Memory Profiler and take a look at memory usage.

    First Impressions

    The main thing I've always loved about the ANTS tools is their ease of use. Pretty much everything is right there in front of you, in a way that makes it easy to find what you need with little digging required. I've worked with other, older profilers before (which shall remain nameless, other than to hint that one was created by a very large chip maker) where it was a mind-boggling experience to figure out how to do simple tasks. Not so with AMP. The opening dialog is very straightforward: you can choose from here whether to debug an executable, a web application (either in IIS or from VS's web development server), Windows services, etc.

    So I chose a .NET executable and navigated to the build location of my test harness, then began profiling. At this point, while the application is running, you can see a chart of the memory as it ebbs and wanes with allocations and collections. At any given point in time, you can take snapshots (to compare states), zoom in, or choose to stop.

    Snapshots

    Taking a snapshot also gives you a breakdown of the managed memory heaps for each generation, so you get an idea how many objects are staying around for extended periods of time (as an object lives and survives collections, it gets promoted into higher generations, where collection becomes less frequent).

    Generating a snapshot brings up an analysis view with very handy graphs that show your generation sizes. Almost all my memory is in Generation 1 in the managed memory component of the first graph, which is good news to me, because Gen 2 collections are much rarer. I once made the mistake of caching data for 30 minutes and found it didn't get collected very quickly after I released my reference, because it had been promoted to Gen 2 – doh!

    Analysis

    It looks, from the second pie chart, like the majority of the allocations were in the string class. This is also expected for me, because the majority of the memory allocated is in the web service responses, so it seems the entities I'm adapting to (to avoid being too tightly coupled to the web service proxy classes, which can easily change out from under me) aren't taking a significant portion of memory.

    I also appreciate that they have clear summary text in key places, such as "No issues with large object heap fragmentation were detected". For novice users, this type of summary information can be critical to getting them to use a tool and develop a good working knowledge of it. There is also a handy link at the bottom, "What to look for on the summary", which loads a web page of help on key points to look for.

    Clicking over to the session overview, it's easy to compare the samples at each snapshot to see how your memory is growing, shrinking, or staying relatively the same. Looking at my snapshots, I'm pretty happy with the fact that memory allocation and heap size seem to be fairly stable and in control. Once again, you can check on the large object heap, generation one heap, and generation two heap across each snapshot to spot trends.

    Back on the analysis tab, we can go to the [Class List] button to get an idea which classes are making up the majority of our memory usage. As was little surprise to me, System.String was the clear majority of my allocations, though I found it surprising that System.Reflection.RuntimeMethodInfo came in second. I was curious about this, so I selected it and went into the [Instance Categorizer]. This view let me see where these instances of RuntimeMethodInfo were coming from. So I scrolled back through the graph and discovered that they were being held by the System.ServiceModel.ChannelFactoryRefCache, and I was satisfied this was just an artifact of my WCF proxy.

    I also like that down at the bottom of the Instance Categorizer it gives you a series of filters and offers to guide you on which filter to use based on the problem you are trying to find. For example, if I suspected a memory leak, I might filter for survivors in growing classes; that is, among instances of a class that is growing in memory (more are being created than cleaned up), which ones survive garbage collection. This might allow me to drill down and find places where I'm holding onto references by mistake and not freeing them!

    Finally, if you want to really see all your instances and who is holding onto them (preventing collection), you can go to the "Instance Retention Graph", which shows what references are being held in memory and who is holding onto them.

    Visual Studio Integration

    Of course, VS has its own profiler built in – and for a free bundled profiler it is quite capable – but AMP gives a much cleaner and easier-to-use experience, and when you install it you also get the option of letting it integrate directly into VS. So once you go back into VS after installation, you'll notice an ANTS menu which lets you launch the ANTS profiler directly from Visual Studio. Clicking on one of these options fires up the project in the profiler immediately, allowing you to get right in. It doesn't integrate with the Visual Studio windows themselves (like the VS profiler does), but still, the plethora of information it provides and the clear and concise manner in which it presents it make it well worth it.

    Summary

    If you like the ANTS series of tools, you shouldn't be disappointed with the ANTS Memory Profiler. It was so easy to use that I was able to jump in with very little product knowledge and get the information I was looking for. I've used other profilers that came with 3-inch-thick tomes that you had to read in order to get anywhere with the tool, and this one is not like that at all. It's built for your everyday developer to get in and find their problems quickly, and I like that!

    Read the article

  • What is safe to exclude for a full system backup?

    - by seb
    Hi, I'm looking for a list of which paths/files are safe to exclude from a full system/home backup, considering that I have a list of installed packages.

        /home/*/.thumbnails
        /home/*/.cache
        /home/*/.mozilla/firefox/*.default/Cache
        /home/*/.mozilla/firefox/*.default/OfflineCache
        /home/*/.local/share/Trash
        /home/*/.gvfs/
        /tmp/
        /var/tmp/

    Not real folders, but they can cause severe problems when 'restoring':

        /dev
        /proc
        /sys

    What about /var/ in general?

        /var/backups/ - can get quite large
        /var/log/ - does not require much space and can help for later comparison
        /lost+found/
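    (As an illustration of how such an exclude list is typically wired up, here is a minimal rsync-based sketch; the destination path and the exact exclude set are assumptions to adapt, not a recommendation:)

        #!/bin/bash
        # Full-system backup, skipping caches, trash and pseudo-filesystems
        rsync -aAX / /mnt/backup/ \
            --exclude=/dev/* \
            --exclude=/proc/* \
            --exclude=/sys/* \
            --exclude=/tmp/* \
            --exclude=/var/tmp/* \
            --exclude=/lost+found \
            --exclude='/home/*/.cache' \
            --exclude='/home/*/.thumbnails' \
            --exclude='/home/*/.gvfs'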

    Read the article

  • Are short identifiers bad?

    - by Daniel C. Sobral
    Are short identifiers bad? How does identifier length correlate with code comprehension? What other factors (besides code comprehension) might be worth considering when naming identifiers? Just to try to keep the quality of the answers up, please note that there is already some research on the subject!

    Edit: Curious that everyone either doesn't think length is relevant or tends to prefer longer identifiers, when both links I provided indicate that long identifiers are harmful!

    Read the article

  • PASS Summit 2012 PreCon - DBA-298-P Automate and Manage SQL Server with PowerShell

    - by AllenMWhite
    On Tuesday I presented an all-day pre-conference session on using PowerShell to automate and manage SQL Server. It was a very full day and we had a lot of great questions. One discussion in Module 6 was around scripting all the objects in a database, and I'd mentioned the script I wrote for the book The Red Gate Guide to SQL Server Team-based Development . When putting together the demos for the attendees to download I realized I'd placed that script in the Module 6 folder, so you don't need to go...(read more)

    Read the article

  • Oracle WebCenter in Action: Best Practices from Oracle Consulting

    - by Kellsey Ruppel
    See concrete, real-world examples of deployments throughout the Oracle WebCenter stack. Oracle Consulting will lead you through a discussion about best practices and key customer use cases, as well as offer practical tips to support web experience management, enterprise content management, and portal deployments.

    Watch this webcast as our presenters discuss:

    - Best practices for deployments of large, complex architectures with Oracle WebCenter Sites
    - Key deployments and helpful hints for Oracle WebCenter Content
    - Performance tuning takeaways when using Oracle WebCenter Portal

    Watch the webcast by registering now.

    Read the article

  • Silverlight Cream for March 07, 2011 -- #1055

    - by Dave Campbell
    In this Issue: Max Paulousky, Chris Rouw, David Anson, Jesse Liberty, Shawn Wildermuth, Simon Guindon, and Dhananjay Kumar.

    Above the Fold:

        Silverlight: "Faster Databinding in WPF and Silverlight using OptimizedObservableCollection" (Simon Guindon)
        WP7: "Phoney Tools Updated (WP7 Open Source Library)" (Shawn Wildermuth)

    From SilverlightCream.com:

        Problems With Sharing Windows Phone 7 Applications Within A Large Group Of Beta Testers
        Max Paulousky has a post up discussing the issues surrounding beta testing a WP7 app with a large group of testers... and how to pull it all off.

        WP7 Insights #1: Consuming REST APIs within a WP7 app
        Chris Rouw is beginning a WP7 series based on his recent experience of getting a client's app into the marketplace. This first in his series is on consuming REST APIs... lots of good code and explanations.

        Improving Windows Phone 7 application performance is even easier with these LowProfileImageLoader and DeferredLoadListBox updates
        David Anson has an update to his LowProfileImageLoader and DeferredLoadListBox after issues brought up by readers... so we all win with the great feedback from alert devs.

        When Isolated Storage Isn't Enough
        Jesse Liberty started looking at Jeremy Likness' Sterling with this post in the WP7 From Scratch series. He starts with downloading it from CodePlex... a great way to get into Sterling if you haven't already.

        Phoney Tools Updated (WP7 Open Source Library)
        Shawn Wildermuth has the latest drop of his Phoney Tools up... this is the last Alpha. I've added a tag for it as well. He's fixed some things, added others... check out the post and go grab the code.

        Faster Databinding in WPF and Silverlight using OptimizedObservableCollection
        Simon Guindon is a blogger I've not been following, but this post on an OptimizedObservableCollection caught my eye. He added an AddRange() to the ObservableCollection to get a speed enhancement when adding items... and a pretty good speed enhancement it is.

        Reading files asynchronously using WebClient class in Silverlight
        Dhananjay Kumar is another prolific blogger that I've not been following, so we'll start with his latest... a step-by-step guide to reading an XML file asynchronously.

    Read the article

  • The Road to Professional Database Development: Database Normalization

    Not only is the process of normalization valuable for increasing data quality and simplifying the process of modifying data, but it actually makes the database perform much faster. To prove the point, Peter Larsson takes a large unnormalised database and subjects it to successive stages of normalisation.

    Read the article

  • Does the tempdb Log file get Zero Initialized at Startup?

    - by Jonathan Kehayias
    While working on a problem today, I happened to think about what the impact on startup might be for a really large tempdb transaction log file. It's fairly common knowledge that data files in SQL Server 2005+ on Windows Server 2003+ can be instant-initialized, but transaction log files cannot. If this is news to you, see the following blog posts:

    Kimberly L. Tripp | Instant Initialization - What, Why and How?
    In Recovery... | Misconceptions around instant file initialization

    ...(read more)

    Read the article

  • SQL Server PowerShell Provider follows the Version of PowerShell on the Host and other errata

    - by BuckWoody
    There may be some misunderstanding about how the PowerShell Provider for SQL Server works. I've written an article or two explaining that you can use PowerShell with SQL Server without having the SQL Server 2008 (or higher) provider around. After all, PowerShell just uses .NET, and SQL Server "Server Management Objects" (SMO) listen to that interface as well.

    In SQL Server 2008 and higher we created a "MiniShell" for PowerShell that gives you the ability to treat a SQL Server instance as a drive (called a "Provider" or path or drive) and a few commands (called command-lets). Using these two simple constructs you can move around SQL Server quickly and work with the objects it holds.

    I read the other day where someone stated that we had "re-compiled" PowerShell, so that you would have version 1.0 from SQL Server and 2.0 on your new server. Not so! Drop to a SQLPS prompt and a PowerShell prompt and type this in each:

        $PSVersionTable

    They should return the same value. You can think of a MiniShell as simply a compiled "profile" that gives you those providers and command-lets automatically – that's all. In fact, you can load the SMO libraries yourself without the SQL Server 2008 Provider anywhere in sight. I do this all the time, since the MiniShell also has other restrictions.

    Also remember that if you run a PowerShell script as a SQL Agent Job step type (in 2008 and higher), you're running under the context of the account that starts Agent – I think most folks know this, but it's good to keep in mind.

    There's a re-written section of Books Online that covers working with this very nicely. It also answers the questions "How do I connect to another server using the SQL Server PowerShell Provider?" (hint: it's just CD) and "How do I load all the SMO stuff if I don't want to use the Provider?", and more. Be sure to check out the note at the bottom that explains the firewall exceptions you'll need to enable to CD to that remote server. Here's that link: http://msdn.microsoft.com/en-us/library/cc281947.aspx
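    (A quick sketch of the drive metaphor in action; the server name below is a placeholder, and inside SQLPS the provider is already loaded, whereas in plain PowerShell you would load the snap-ins/SMO assemblies first:)

        # "CD" to the Databases folder of a (possibly remote) default instance
        Set-Location SQLSERVER:\SQL\MyServer\DEFAULT\Databases
        # List the databases the way you would list files in a folder
        Get-ChildItem | Select-Object Name, RecoveryModel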

    Read the article

  • Warp GameObject Size When Entering/Leaving Area

    - by Julian
    Below I have an image describing the desired functionality I am going for. Let's say you control a square and when you move this square into a given area, any part of your rigidbody/model inside of the area will be magnified upon entering and shrunk upon leaving. So now you more or less are made up of two rectangles, one small and one large. What would be an elegant approach towards achieving this effect?

    Read the article

  • Oracle Joins XBRL US To Help Drive Adoption

    - by Theresa Hickman
    Recently, Oracle joined XBRL US, the national consortium for XML business reporting standards, to stay ahead of the technology and help increase XBRL adoption by U.S. companies by 2011. Large accelerated filers were mandated to use XBRL starting in 2009; other large filers started in 2010, and all other public companies must comply in June 2011. Here is a list of other organizations that recently joined XBRL US:

        Oracle
        Citi
        Federal Filings LLC
        Edgar Agents LLC
        XSP

    For those of you who have been living under a rock, XBRL stands for eXtensible Business Reporting Language. Simply put, it's electronic reporting. Just like PDFs or spreadsheets are a type of output, XBRL is another output option in electronic form. Right now, the transition to XBRL means extra work for publicly traded companies, because they need to file their financial statements in both EDGAR and XBRL formats. Once the SEC phases out the EDGAR system, XBRL will be the primary way to deliver financial information, with footnotes and supporting schedules, to multiple audiences without having to re-key or reformat the information. A single XBRL document can be converted to printed output, published via the Web, fed into an SEC database (e.g. EDGAR) or forwarded to a creditor for analysis.

    Question: How does Oracle support XBRL reporting?

    Answer: The latest XBRL 2.1 specifications are supported by Oracle Hyperion Disclosure Management, which is part of Oracle's Hyperion Financial Close Suite along with Hyperion Financial Management, Hyperion Financial Data Quality Management and Hyperion Financial Close Management. Hyperion Disclosure Management supports the authoring of financial filings in Microsoft Office, with "hot links" to reports and data stored in Hyperion Financial Management or Oracle Essbase. It supports the XBRL tagging of financial statements as well as the disclosures and footnotes within your 10-K and 10-Q filings. Because many of our customers use Hyperion Financial Management (HFM) for their consolidation needs, they simply generate XBRL statements from their consolidated financial results.

    Question: What if you don't use Hyperion Financial Management, and you only use E-Business Suite General Ledger or PeopleSoft General Ledger?

    Answer: No problem, all you need is Hyperion Disclosure Management to generate XBRL from your general ledger. Here are the steps:

        1. Upload the XBRL taxonomy from the SEC or XBRL website into Hyperion Disclosure Management.
        2. Publish your financial statements out of general ledger to Excel.
        3. Perform the XBRL tag mapping from the Excel output to Hyperion Disclosure Management.

    For more information and some interesting background on XBRL, I recommend reading What You Need To Know About XBRL, written by our EPM expert, John O'Rourke.

    Read the article

  • Code and Slides: Getting Started Building Windows 8 HTML/JavaScript Metro Apps

    - by dwahlin
    This presentation is from a talk I gave at the spring 2012 DevConnections conference. It covers some of the key topics you need to know to get started building Windows 8 HTML/JavaScript Metro apps, including navigation options, UI surfaces that can be used, controls, data binding and templates, and animations. View more of my presentations here. Sample code shown in the presentation can be found here. A large number of samples are available in the Windows 8 SDK, which can be found here.

    Read the article

  • Does the type of prior employers matter when applying for a new job?

    - by Peter Smith
    Is there a bias in industry regarding the kind of previous employers an applicant has had (government contractors, researchers, small businesses, large corporations)? I'm currently working for a university as a generalist programmer, and I like my job here. But I'm worried that if I had to switch jobs down the road and apply for a corporate job, my resume would be dismissed based on the fact that I'm working in academia.

    Read the article

  • Retroactively applying a Piwik goal to visitors

    - by Andrew Aylett
    I started receiving a large (for me) amount of traffic on one of my pages yesterday. Today, I thought that it would be useful to track goals from that page; there's a link to my blog on it. I added the 'visited external link' goal to Piwik, and new visits are being recorded. However, it seems to me that there must be enough data in the database to retroactively apply the goal to past visitors. Is there a way to achieve that?

    Read the article

  • Get Started with .Net and Apache Cassandra

    - by Sazzad Hossain
    Just came across an easy and nice-to-read article explaining how to get started with a NoSQL database system. These non-relational databases are becoming increasingly popular for tackling distribution and large data set problems. Cassandra's ColumnFamily data model offers the convenience of column indexes with the performance of log-structured updates, strong support for materialized views, and powerful built-in caching. The article is nicely written by Kellabyte and shows, step by step, how to get going with the programming on a .NET platform. Read more here.

    Read the article
