Search Results

Search found 17468 results on 699 pages for 'expression design'.


  • TouchDevelop: The Fast Path to Windows 8 and Phone Apps

    - by Clint Edmonson
    Are you looking for a little extra cash for the upcoming holidays? Then you might be interested in creating some cool apps to sell in the Windows Store. Or maybe you’re simply curious and want to try your hand at developing for Windows 8 and Windows Phone. In either case, the newly released TouchDevelop Web App is for you. TouchDevelop Web App is a development environment to create apps on your tablet or smartphone, without requiring a separate PC. Scripts written by using TouchDevelop can access data, media, and sensors on the phone, tablet, and PC. The script can interact with cloud services, including storage, computing, and social networks. TouchDevelop lets you quickly create fun games and useful tools, turning your scripts into true Windows Phone and Windows 8 apps. A year ago, Microsoft Research released TouchDevelop for Windows Phone, which is being used by enthusiasts, students, and researchers to program their phones in fun, inventive, and interesting ways. These scripts are available at TouchDevelop for anyone to download and use. Ever since we released TouchDevelop, we’ve been eyeing the tablet form factor and working on a version for the browser. Now, with the release of TouchDevelop Web App, the wait is over: the tablet version is ready, so go play around with it. All TouchDevelop scripts that are developed on the smartphone can be downloaded to the tablet and run (if hardware allows). Any script that is developed on the tablet can also be accessed on the phone. And scripts can be converted to Windows Phone or Windows 8 apps and submitted to the Windows Phone Store or Windows Store, respectively. TouchDevelop Web App’s editor and programming language have been designed for tablet devices with touchscreens, but you can also use a keyboard and a mouse. So grab your web-enabled device and give the TouchDevelop Web App a try. It’s fun and easy, and could even put a little cash in your holiday-depleted wallet. Or at least give you bragging rights at family get-togethers. Are you interested in further tips on Windows 8 development?  Sign up for the 30 to launch program which will help you build a Windows Store application in 30 days.  You will receive a tip per day for 30 days, along with potential free design consultations and technical support from a Windows 8 expert. As always, stay tuned to my twitter feed for Windows 8, Windows Azure and other Microsoft announcements, updates, and links: @clinted

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-05

    - by Bob Rhubart
    OTN Architect Day - Boston - Sept 12: What to Expect
    If you've never attended an OTN Architect Day, here's a little preview. You start with a continental breakfast. Then you have keynotes by an Oracle expert, and a member of the Oracle ACE community. After that come the break-out sessions, so you have your choice of two sessions in each time slot. So you'll get in two breakouts before lunch. Then you eat. After that there's a panel Q&A during which the audience tosses questions at the assembled session speakers. Then it's on to another set of break-out sessions, followed by a short break. Then the audience breaks into small groups for round table discussions. After that there's a drawing for some cool prizes, followed by the cocktail reception. All that costs you absolutely zero. Register now.
    Starting and Stopping Fusion Applications the Right Way | Ronaldo Viscuso
    While the fastartstop tool that ships with Oracle Fusion Applications does most of the work to start/stop/bounce the Fusion Apps environment, it does not do it all. Oracle Fusion Applications A-Team blogger Ronaldo Viscuso's post "aims to explain all tasks involved in starting and stopping a Fusion Apps environment completely."
    Dodeca Customer Feedback - The Rosewood Company | Tim Tow
    Oracle ACE Director Tim Tow shares anecdotal comments from one of his clients, a company that is deploying Dodeca to replace an aging VBA/Essbase application.
    Configuring UCM cache to check for external Content Server changes | Martin Deh
    Oracle WebCenter and ADF A-Team blogger Martin Deh shares the background information and the solution to a recently encountered customer scenario.
    Proxy As Upgrade to 11g Does Not Like NQSession.User | Art of Business Intelligence
    "In Oracle BI 10g the application was a lot more tolerant of bad design and cavalier usage of variables," observes Oracle ACE Christian Screen. "We noticed an issue recently during an upgrade where the Proxy As configuration in Oracle BI 10g used the NQSession.User variable to identify the user logged into Presentation Servers acting as Proxy."
    Oracle WebLogic Server 11g: Interactive Quick Reference | Dirk Nachbar
    Oracle ACE Dirk Nachbar shares a quick post with information on a new interactive reference guide to Oracle WebLogic Server. "The Quick Reference shows you an architecural overview of the Oracle WebLogic Server processes, tools, configuration files, log files and so on including a short description of each section and the corresponding link to the Oracle WebLogic Server Documentation," says Nachbar.
    Thought for the Day
    "In fast moving markets, adaptation is significantly more important than optimization." — Larry Constantine
    Source: Quotes for Software Engineers

    Read the article

  • Why would more CPU cores on virtual machine slow compile times?

    - by Sid
    [edit#2] If anyone from VMWare can hit me up with a copy of VMWare Fusion, I'd be more than happy to do the same as a VirtualBox vs VMWare comparison. Somehow I suspect the VMWare hypervisor will be better tuned for hyperthreading (see my answer too).
    I'm seeing something curious. As I increase the number of cores on my Windows 7 x64 virtual machine, the overall compile time increases instead of decreasing. Compiling is usually very well suited to parallel processing, as in the middle part (post dependency mapping) you can simply call a compiler instance on each of your .c/.cpp/.cs/whatever files to build partial objects for the linker to take over. So I would have imagined that compiling would scale very well with the number of cores. But what I'm seeing is:
    8 cores: 1.89 sec
    4 cores: 1.33 sec
    2 cores: 1.24 sec
    1 core: 1.15 sec
    Is this simply a design artifact of a particular vendor's hypervisor implementation (type 2: VirtualBox in my case), or something more pervasive across VMs that keeps hypervisor implementations simpler? With so many factors, I seem to be able to make arguments both for and against this behavior - so if someone knows more about this than me, I'd be curious to read your answer. Thanks, Sid
    [edit: addressing comments]
    @MartinBeckett: Cold compiles were discarded.
    @MonsterTruck: Couldn't find an open source project to compile directly. Would be great, but I can't screw up my dev env right now.
    @Mr Lister, @philosodad: I have 8 hardware threads and am using VirtualBox, so it should be a 1:1 mapping without emulation.
    @Thorbjorn: I have 6.5GB for the VM and a smallish VS2012 project - it's quite unlikely that I'm swapping in/out and thrashing the page file.
    @All: If someone can point to an open source VS2010/VS2012 project, that might be a better community reference than my (proprietary) VS2012 project. Orchard and DNN seem to need environment tweaking to compile in VS2012. I really would like to see if someone with VMWare Fusion also sees this (for a VMWare vs VirtualBox comparison).
    Test details:
    Hardware: MacBook Pro Retina
    CPU: Core i7 @ 2.3GHz (quad core, hyperthreaded = 8 cores in Windows Task Manager)
    Memory: 16 GB
    Disk: 256GB SSD
    Host OS: Mac OS X 10.8
    VM type: VirtualBox 4.1.18 (type 2 hypervisor)
    Guest OS: Windows 7 x64 SP1
    Compiler: VS2012 compiling a solution with 3 C# Azure projects
    Compile times measured by a VS2012 plugin called 'VSCommands'
    All tests run 5 times, first 2 runs discarded, last 3 averaged

    Read the article

  • Oracle JDeveloper 11gR2 Cookbook book review

    - by Chris Muir
    I recently received a free copy of Oracle JDeveloper 11gR2 Cookbook, published by Packt Publishing, for review. Readers of technical cookbooks will know this genre of text includes problems that developers will hit and the prescribed solutions, in this case for Oracle's Application Development Framework (ADF). Books like this excel on broad coverage, a logical progression of solutions throughout the book, and a readable narrative around the numerous steps and code. This book progresses well through ADF application assembly, ADF Business Components, the view layer, security, deployment and tuning. Each recipe had a clear introduction, and I especially enjoyed the "There's more" follow-up sections for some recipes that lead the reader on to related ideas and issues the reader really needs to be aware of. Also worth commenting on: having worked with ADF for over 5 years, there certainly were recipes and solutions I hadn't encountered before, and this book gets bonus points for that. As a reviewer, what negatives can I give this text? The book has cast its net too wide by trying to cover "everything from design and construction, to deployment, testing, debugging and optimization." ADF is such a large and sophisticated technology that this book, with 100 recipes, barely scrapes the surface. Don't expect all your ADF problems to be solved here. In turn there is inconsistency in the level of problems and solutions. I felt at the beginning the book was pitching itself at advanced problems to solve (that's great for me), but then it introduces topics like building a static View Object or train. These topics, in my opinion, are fairly simple and are covered by the Oracle documentation just as well; they shouldn't have been included here. In conclusion, ADF beginners will find this book worthwhile as it will open your eyes to the wider problems and solutions required for ADF, and experts will value it just for the fact that they can point junior programmers at the book for certain problems and say "get on with it". Is there scope for more ADF tomes like this? Yes! I'd love to see a cookbook specializing in ADF Business Components (hint hint to budding authors).

    Read the article

  • What is 'Ubuntu Unity' (for the Desktop)?

    - by Martin
    OK, so there's the buzz about Canonical (wanting to) switch, for the new Ubuntu version, from the default GNOME desktop to their own Unity shell. (I hope that's accurate.) It seems I cannot totally fathom what Unity actually is. Looking at its homepage, it currently is firmly targeted at netbooks and the somewhat different usage model on these. Is it a classical desktop? -- Taskbar? Shortcuts? Is the difference between Ubuntu(GNOME)+Unity more/less pronounced than the difference between Ubuntu and Kubuntu? Will "my parents" be able to get the interface if they've been using the classical GNOME desktop so far? Edit: I would not like to split this up into more specific questions, as "What is Unity?" is exactly what the people I set up Ubuntu boxes for will ask me if they hear that the newer Ubuntu version is using that instead of the Desktop -- and it might well happen that someone phrases it like that :-) I will certainly not give them the link to the homepage, as the explanation there does not lay out whether it is a desktop or something more or something less (it does not for me - therefore I'm asking here):
    "Unity is designed for netbooks and related touch-based devices. It includes [...] that makes it fast and easy to access [...] while removing screen elements that are rarely used in mobile and netbook computing." (emphasis mine) -- the explanation there doesn't even mention the desktop PC!
    "Unity has a vertical task management panel on the left-hand side and a menu panel at the top of the screen. [...]" This sounds like a re-themed normal desktop.
    "Clicking on an icon will give the target application focus if it is already running or launch it if it is not already running. If you click the ..." Aha. Sounds like Windows 7. "... icon of an application that already has focus, Unity will activate an Expose-style view of all the open windows associated with that application." No clue what that's supposed to be.
    So it would really be nice if someone could explain, for people who aren't experts in desktop design terminology, what Unity is.

    Read the article

  • Java EE 7 Roadmap

    - by Linda DeMichiel
    The Java EE 6 Platform, released in December 2009, has seen great uptake from the community with its POJO-based programming model, lightweight Web Profile, and extension points. There are now 13 Java EE 6 compliant appserver implementations today! When we announced the Java EE 7 JSR back in early 2011, our plans were that we would release it by Q4 2012. This target date was slightly over three years after the release of Java EE 6, but at the same time it meant that we had less than two years to complete a fairly comprehensive agenda — to continue to invest in significant enhancements in simplification, usability, and functionality in updated versions of the JSRs that are currently part of the platform; to introduce new JSRs that reflect emerging needs in the community; and to add support for use in cloud environments. We have since announced a minor adjustment in our dates (to the spring of 2013) in order to accommodate the inclusion of JSRs of importance to the community, such as Web Sockets and JSON-P. At this point, however, we have to make a choice. Despite our best intentions, our progress has been slow on the cloud side of our agenda. Partially this has been due to a lack of maturity in the space for provisioning, multi-tenancy, elasticity, and the deployment of applications in the cloud. And partially it is due to our conservative approach in trying to get things "right" in view of limited industry experience in the cloud area when we started this work. Because of this, we believe that providing solid support for standardized PaaS-based programming and multi-tenancy would delay the release of Java EE 7 until the spring of 2014 — that is, two years from now and over a year behind schedule. In our opinion, that is way too long. We have therefore proposed to the Java EE 7 Expert Group that we adjust our course of action — namely, stick to our current target release dates, and defer the remaining aspects of our agenda for PaaS enablement and multi-tenancy support to Java EE 8. Of course, we continue to believe that Java EE is well-suited for use in the cloud, although such use might not be quite ready for full standardization. Even today, without Java EE 7, Java EE vendors such as Oracle, Red Hat, IBM, and CloudBees have begun to offer the ability to run Java EE applications in the cloud. Deferring the remaining cloud-oriented aspects of our agenda has several important advantages: It allows Java EE Platform vendors to gain more experience with their implementations in this area and thus helps us avoid risks entailed by trying to standardize prematurely in an emerging area. It means that the community won't need to wait longer for those features that are ready at the cost of those features that need more time. Because we have already laid some of the infrastructure for cloud support in Java EE 7, including resource definition metadata, improved security configuration, JPA schema generation, etc., it will allow us to expedite a Java EE 8 release. We therefore plan to target the Java EE 8 Platform release for the spring of 2015. This shift in the scope of Java EE 7 allows us to better retain our focus on enhancements in simplification and usability and to deliver on schedule those features that have been most requested by developers. 
These include the support for HTML 5 in the form of Web Sockets and JSON-P; the simplified JMS 2.0 APIs; improved Managed Bean alignment, including transactional interceptors; the JAX-RS 2.0 client API; support for method-level validation; a much more comprehensive expression language; and more. We feel strongly that this is the right thing to do, and we hope that you will support us in this proposed direction.

    Read the article

  • Moving from Silverlight 4 Beta to RC - Part 1

    The other day I had finished up my Task-It webinar, written a few blog posts, and knew the time had come to move from my Silverlight 4 Beta environment up to the latest RC (release candidate) bits that were released last Monday. What disappointed me when I went to the Silverlight 4 Information Page is that it told me what to install, but not what to uninstall first.
    Uninstalling
    I'm not entirely sure if I had to uninstall anything, or if installing the new stuff would just work, but in poking around the web I found posts stating that you must uninstall the following items first. Unfortunately I'm going by memory here and have not been able to find my way back to the magic page I found in the myriad of posts that I went through:
    Microsoft Visual Studio Beta 2
    Microsoft .NET Framework 4 Extended (apparently this must be done *before* the next one)
    Microsoft .NET Framework Client Profile
    Microsoft Silverlight 4 Tools for Visual Studio 2010
    WCF RIA Services Preview for Visual Studio 2010
    While I was at it, I removed a bunch of other stuff, like Blend 3, Blend 4, the SDKs associated with them, and a bunch of other stuff that was old. Of course, I didn't really want/need to keep any Silverlight 3 stuff around as I am developing Task-It in Silverlight 4. If I need a Silverlight 3 environment at some point I'll set it up in a virtual environment. NOTE: One thing that I did not uninstall is the Microsoft Silverlight 4 Toolkit November 2009. The reason is that they have not released the March version yet, so if you uninstall this, you'll end up having to reinstall it.
    Installing
    OK, now that I had all of that old stuff off my machine, it was time to get the new stuff. For this part I liked Tim Heuer's post, A Guide to What has Changed in Silverlight 4 RC, better than the Silverlight 4 Information Page.
    Visual Studio 2010 RC - I installed Ultimate, but you may not need that version. No harm, it's free for now anyway. I downloaded the .exe and the 3 .rar files, then ran the .exe. I then extracted the contents of the ISO (using WinZip) to a new directory, and now had Setup.exe, a bunch of .cab files and some other assorted stuff. I simply ran Setup.exe and chose custom install (only because I wanted to uncheck Visual C++... I don't really have a need for that).
    Silverlight 4 Tools for Visual Studio 2010 - As mentioned in Tim's blog post, this installs the Silverlight developer runtime, SDK, tools, and WCF RIA Services.
    WCF RIA Services Toolkit March 2010 - I'm not sure if/when I'll need any of this stuff, but no harm in installing it anyway.
    Expression Blend 4 beta - Only if you plan to use Blend, which I do.
    Windows Phone Developer Tools - Only if you are interested in playing with Windows Phone 7 development.
    Wrap Up
    Hopefully I got those steps right. If anyone finds anything I've missed, please just add a comment to this post and I'll update it accordingly.

    Read the article

  • Hadoop, NOSQL, and the Relational Model

    - by Phil Factor
    (Guest Editorial for the IT Pro/SysAdmin Newsletter)
    Whereas relational databases fit the world of commerce like a glove, it is useless to pretend that they are a perfect fit for all human endeavours. Although, with SQL Server, we've made great strides with indexing text, in processing spatial data and in processing markup, there is still a problem in dealing efficiently with large volumes of ephemeral semi-structured data. Key-value stores such as Cassandra, Project Voldemort, and Riak are of great value for ephemeral data, and seem of equal value as a data feed that provides aggregations to an RDBMS. However, the document databases such as MongoDB and CouchDB are ideal for semi-structured data for which no fixed schema exists; analytics and logging are obvious examples. NoSQL products, such as MongoDB, tackle the semi-structured data problem with panache. MongoDB is designed with a simple document-oriented data model that scales horizontally across multiple servers. It doesn't impose a schema, and relies on the application to enforce the data structure. This is another take on the old 'EAV' problem (where you don't know in advance all the attributes of a particular entity). It uses a clever replica set design that allows automatic failover, and uses journaling for data durability. It allows indexing and ad-hoc querying. However, for SQL Server users, the obvious choice for handling semi-structured data is Apache Hadoop. There will soon be an ODBC driver for Apache Hive and an add-in for Excel. Additionally, there are now two Hadoop-based connectors for SQL Server: the Apache Hadoop connector for SQL Server 2008 R2, and the SQL Server Parallel Data Warehouse (PDW) connector. We can connect to Hadoop, process the semi-structured data, and then store it in SQL Server. For one steeped in the culture of relational SQL databases, I might be expected to throw up my hands in the air in a gesture of contempt for a technology that was, judging by the overblown journalism on the subject, about to make my own profession as archaic as the saggar maker's bottom knocker (a potter's assistant who helped the saggar maker to make the bottom of the saggar by placing clay in a metal hoop and bashing it). However, on the contrary, I find that I'm delighted with the advances made by the NoSQL databases in the past few years. The flow of ideas from the NoSQL providers will knock any trace of complacency out of the providers of relational databases and inspire them to back-fit features such as horizontal scaling, sharding, and automatic failover into SQL-based RDBMSs. It will do the breed a power of good to benefit from all this lateral thinking.

    Read the article

  • Wrapping REST based Web Service

    - by PaulPerry
    I am designing a system that will be running online under Microsoft Windows Azure. One component is a REST-based web service which will really be a wrapper (using the proxy pattern) that calls the REST web services of a business partner, which have to do with BLOB storage (note: we are not using Azure storage). The majority of the functionality will be taking a request, calling our partner's web service, receiving the response and then passing that back to the client. There are a number of reasons for doing this, but one of the big ones is that we are going to support three clients: our desktop application (Win and Mac), mobile apps (iOS), and a web front end. Having a single API of our own in front of the partner protects us if that partner ever changes. I want our service to support both JSON and XML for the data transfer format: JSON for the web, and probably XML for the desktop and mobile clients (we already have an XML parser in those products). Our partner also supports both of these formats. I was planning on using ASP.NET MVC 4 with the Web API. As I design this, the thing that concerns me is the static type checking of C#. What if the partner adds or removes elements from the data? We can probably defensively code for that, but I still feel some concern. Also, we have to do a fair amount of tedious coding to set up our API and then turn around and call our partner's API. There probably is not much choice on that though. But, in the back of my mind I wonder if maybe a more dynamic language would be a better choice. I want to reach out and see if anybody has had to do this before, what technology solutions they have used (I am not attached to this one; these days Azure can host other technologies), and whether anybody who has done something like this can point out any issues that came up. Thanks! Researching the issue seems to only find solutions which focus on connecting a SOAP web service over a proxy server, and not what I am referring to here. Note: Cross-posted (by suggestion) from http://stackoverflow.com/questions/11906802/wrapping-rest-based-web-service Thank you!
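    As a rough illustration of the proxy idea (this is a sketch, not the poster's actual code; the controller name, route, and partner URL are all hypothetical), a pass-through ASP.NET Web API controller could look something like this:

        using System.Net.Http;
        using System.Threading.Tasks;
        using System.Web.Http;

        // Hypothetical wrapper controller: our three clients call this API,
        // and it forwards the call to the partner's BLOB storage service.
        public class BlobsController : ApiController
        {
            // Base address of the partner's REST API (placeholder value).
            private static readonly HttpClient Partner = new HttpClient
            {
                BaseAddress = new System.Uri("https://partner.example.com/api/")
            };

            // GET api/blobs/{id} - forward the request and relay the partner's response.
            public async Task<HttpResponseMessage> Get(string id)
            {
                // Any partner-specific authentication headers would be added here.
                HttpResponseMessage partnerResponse = await Partner.GetAsync("blobs/" + id);

                // Relay status code, headers, and content unchanged so all clients
                // see one stable API regardless of what the partner does internally.
                return partnerResponse;
            }
        }

    Returning the partner's HttpResponseMessage relays whatever format the partner sent back; if the wrapper instead mapped the payload onto its own typed models, Web API's media-type formatters would handle the JSON/XML negotiation mentioned above.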

    Read the article

  • What if you could work on anything you wanted?

    - by Nick Harrison
    What if you could work on anything you wanted? Redgate is doing an experiment of sorts this week, called Down Tools Week. The idea is that they stopped working on their regular projects for a week to strike out on something that catches their attention and drives their passion. Evidently, in many cases these projects have turned out to be new features in their existing products that individuals were interested in, some were internal initiatives, and some were evidently off-the-wall new ideas. Today is show and tell, where they will share with each other what they have been working on. There may well be some interesting announcements coming out of this. The prospects are exciting. I understand that Google does something similar, allowing their employees a specified amount of time to work on projects of their own choosing. This has been the breeding ground for some of my favorite services. It is a shame that more companies do not follow such practices. Now I know that most companies cannot afford to shut down everything for a week, and sometimes you can't really explore an interesting idea in 8 hours a week or however much time Google allocates, but it still may be worthwhile. What would happen if your company gave you, as an individual, 1 week each quarter to work on a project of your own design and see what happens? I would be happy even if you still had to get approval before your week-long adventure. Personally, I think that this could be a very effective use of training budgets. Give me a week to research something on my own and you would be amazed at what I can find out. Maybe this should be the prerequisite before starting a new project. Stagger the team onboarding, but have everyone spend a week-long sabbatical studying BizTalk before starting a project that will hinge on BizTalk. The show and tell afterwards is a great way to keep everyone honest, or at least reassure management that everyone is honest. If your goal was to spend a week researching and exploring a new technology, and you had to do a show and tell afterwards to show off what you had learned, then everyone can learn a bit of what you just learned. Sounds like a promising win-win to me. Maybe it is a pipe dream, but what if... What would you work on if given the opportunity to work on anything you wanted?

    Read the article

  • Motivating yourself to actually write the code after you've designed something

    - by dpb
    Does it happen only to me or is this familiar to you too? It's like this: you have to create something; a module, a feature, an entire application... whatever. It is something interesting that you have never done before, it is challenging. So you start to think about how you are going to do it. You draw some sketches. You write some prototypes to test your ideas. You put different pieces together to get the complete view. You finally end up with a design that you like, something that is simple, clear to everybody, easily maintainable... you name it. You covered every base, you thought of everything. You know that you are going to have this class and that file and that database schema. Configure this here, adapt this other thingy there, etc. But now, after everything is settled, you have to sit down and actually write the code for it. And it is not challenging anymore... Been there, done that! Writing the code now is just a "formality" and feels like re-iterating what you've just finished. At my previous job I sometimes got away with it because someone else did the coding based on my specifications, but at my new gig I'm in charge of the entire process, so I have to do this too ('cause I get paid to do it). But I have a pet project I'm working on at home, after work, and there it is just me and no one is paying me to do it. I do the creative work and then, when the time comes to write it down, I just don't feel like it (let's browse the web a little, see what's new on P.SE, on SO, etc.). I just want to move on to the next challenging thing, and then to the next, and the next... Does this happen to you too? How do you deal with it? How do you convince yourself to go in and write the freaking code? I'll take any answer.

    Read the article

  • How employable am I as a programmer?

    - by dsimcha
    I'm currently a Ph.D. student in Biomedical Engineering with a concentration in computational biology and am starting to think about what I want to do after graduate school. I feel like I've accumulated a lot of programming skills while in grad school, but have taken a very non-traditional path to learning all this stuff. I'm wondering whether I would have an easy time getting hired as a programmer and could fall back on that if I can't find a good job directly in my field, and if so, whether I would qualify for a more prestigious position than "code monkey".
    Things I Have Going For Me
    Approximately 4 years of experience programming as part of my research. I believe I have a solid enough grasp of the fundamentals that I could pick up new languages and technologies pretty fast, and could demonstrate this in an interview.
    Good math and statistics skills.
    An extensive portfolio of open source work (and the knowledge that working on these projects implies):
    I wrote a statistics library in D, mostly from scratch.
    I wrote a parallelism library (parallel map, reduce, foreach, task parallelism, pipelining, etc.) that is currently in review for adoption by the D standard library.
    I wrote a 2D plotting library for D against the GTK Cairo backend. I currently use it for most of the figures I make for my research.
    I've contributed several major performance optimizations to the D garbage collector. (Most of these were low-hanging fruit, but it still shows my knowledge of low-level issues like memory management, pointers and bit twiddling.)
    I've contributed lots of miscellaneous bug fixes to the D standard library and could show the change logs to prove it. (This demonstrates my ability to read other people's code.)
    Things I Have Going Against Me
    Most of my programming experience is in D and Python. I have very little to virtually no experience in the more established, "enterprise-y" languages like Java, C# and C++, though I have learned a decent amount about these languages from small, one-off projects and discussions about language design in the D community.
    In general I have absolutely no knowledge of "enterprise-y" technologies. I've never used a framework before, possibly because most reusable code for scientific work and for D tends to call itself a "library" instead.
    I have virtually no formal computer science/software engineering training. Almost all of my knowledge comes from talking to programming geek friends, reading blogs, forums, StackOverflow, etc.
    I have zero professional experience with the official title of "developer", "software engineer", or something similar.

    Read the article

  • Legality of similar games

    - by Jamie Taylor
    This is my first question on GD.SE, and I hope it's in the right place. A little background: I'm an amateur game developer (read: not explicitly employed to develop games, but employed as a software developer) and took a ComSci with Games Development degree. My question: what is the legal situation/standpoint of creating a copycat title? I know that there are only N number of ways of solving a problem, and N number of ways to design a piece of software. Say that an independent developer designed a copycat game (a Tetris clone in this example), and decided to use that game to generate income for themselves as well as interest for their other products. Say the developer adds a disclaimer into the software along the lines of "based on , originally released c. by ." Are there any legal problems/grey areas with the developer in this example releasing this game commercially? Would they run into legal problems? Should the developer in this example expect cease-and-desist orders or lawsuit claims from the original publishers? Have original publishers been known to, effectively, kill independent projects because they are a little too close to older titles? I know that there was at least one attempt by a group of independent developers to remake Sonic the Hedgehog 2, and Sega shut them down. I also know of Sega shutting down development of the independent Streets of Rage Remake. I know that "but it's an old game, your honour" isn't a great legal standpoint when it comes to defending yourself. But could an independent developer have a lawsuit filed against them for re-implementing an older title in a new way? I know that there are a lot of copycat versions of older titles like Tetris available on app stores (and similar services), and that it would be very difficult for a major publisher to shut them all down. Regardless of this, is making a Tetris (or other game) copycat/clone illegal? We were taught lots of different things at university, but we never covered copyright law. I'm presuming that the thinking behind it was "IF these students get jobs in games development, they won't need to know anything about the legal side of it, because their employers will have legal departments... presumably". tl;dr Is it illegal to create a clone or copycat of an old title, and make money from it?

    Read the article

  • Another question about handling game states

    - by Eva
    I'm making a game designed with the entity-component paradigm that uses systems to communicate between components as explained here. I've reached the point in my development that I need to add game states (such as paused, playing, level start, round start, game over, etc.), but I'm not sure how to do it with my framework. I've looked at this code example on game states which everyone seems to reference, but I don't think it fits with my framework. It seems to have each state handling its own drawing and updating. My framework has a SystemManager that handles all the updating using systems. For example, here's my RenderingSystem class:

        public class RenderingSystem extends GameSystem {
            private GameView gameView_;

            /**
             * Constructor
             * Creates a new RenderingSystem.
             * @param gameManager The game manager. Used to get the game components.
             */
            public RenderingSystem(GameManager gameManager) {
                super(gameManager);
            }

            /**
             * Method: registerGameView
             * Registers gameView into the RenderingSystem.
             * @param gameView The game view registered.
             */
            public void registerGameView(GameView gameView) {
                gameView_ = gameView;
            }

            /**
             * Method: triggerRender
             * Adds a repaint call to the event queue for the dirty rectangle.
             */
            public void triggerRender() {
                Rectangle dirtyRect = new Rectangle();
                for (GameObject object : getRenderableObjects()) {
                    GraphicsComponent graphicsComponent =
                            object.getComponent(GraphicsComponent.class);
                    dirtyRect.add(graphicsComponent.getDirtyRect());
                }
                gameView_.repaint(dirtyRect);
            }

            /**
             * Method: renderGameView
             * Renders the game objects onto the game view.
             * @param g The graphics object that draws the game objects.
             */
            public void renderGameView(Graphics g) {
                for (GameObject object : getRenderableObjects()) {
                    GraphicsComponent graphicsComponent =
                            object.getComponent(GraphicsComponent.class);
                    if (!graphicsComponent.isVisible()) continue;

                    GraphicsComponent.Shape shape = graphicsComponent.getShape();
                    BoundsComponent boundsComponent =
                            object.getComponent(BoundsComponent.class);
                    Rectangle bounds = boundsComponent.getBounds();

                    g.setColor(graphicsComponent.getColor());
                    if (shape == GraphicsComponent.Shape.RECTANGULAR) {
                        g.fill3DRect(bounds.x, bounds.y, bounds.width, bounds.height, true);
                    } else if (shape == GraphicsComponent.Shape.CIRCULAR) {
                        g.fillOval(bounds.x, bounds.y, bounds.width, bounds.height);
                    }
                }
            }

            /**
             * Method: getRenderableObjects
             * @return The renderable game objects.
             */
            private HashSet<GameObject> getRenderableObjects() {
                return gameManager.getGameObjectManager().getRelevantObjects(getClass());
            }
        }

    Also all the updating in my game is event-driven. I don't have a loop like theirs that simply updates everything at the same time. I like my framework because it makes it easy to add new GameObjects, but doesn't have the problems some component-based designs encounter when communicating between components. I would hate to chuck it just to get pause to work. Is there a way I can add game states to my game without removing the entity-component design? Does the game state example actually fit my framework, and I'm just missing something?

    Read the article

  • What algorithms can I use to detect if articles or posts are duplicates?

    - by michael
    I'm trying to detect if an article or forum post is a duplicate entry within the database. I've given this some thought, coming to the conclusion that someone who duplicates content will do so in one of three ways (in descending difficulty to detect):
    simple copy-paste of the whole text
    copy and paste parts of the text, merging it with their own
    copy an article from an external site and masquerade it as their own
    Prepping Text For Analysis
    Basically, any anomalies are removed; the goal is to make the text as "pure" as possible. For more accurate results, the text is "standardized" by:
    Stripping duplicate white space and trimming leading and trailing white space.
    Standardizing newlines to \n.
    Removing HTML tags.
    Stripping URLs using a RegEx called Daring Fireball.
    I use BB code in my application, so that goes too.
    Accented and foreign (non-English) characters are converted to their plain form.
    I store information about each article in a (1) statistics table and a (2) keywords table.
    (1) Statistics Table
    The following statistics are stored about the textual content (much like this post):
    text length
    letter count
    word count
    sentence count
    average words per sentence
    automated readability index
    Gunning fog score
    For European languages, Coleman-Liau and the Automated Readability Index should be used as they do not use syllable counting, so they should produce a reasonably accurate score.
    (2) Keywords Table
    The keywords are generated by excluding a huge list of stop words (common words), e.g., 'the', 'a', 'of', 'to', etc.
    Sample Data
    text_length, 3963
    letter_count, 3052
    word_count, 684
    sentence_count, 33
    word_per_sentence, 21
    gunning_fog, 11.5
    auto_read_index, 9.9
    keyword 1, killed
    keyword 2, officers
    keyword 3, police
    It should be noted that once an article gets updated, all of the above statistics are regenerated and could be completely different values. How could I use the above information to detect whether an article that's being published for the first time already exists within the database? I'm aware that anything I design will not be perfect, the biggest risks being that (1) content that is not a duplicate gets flagged as a duplicate and (2) the system lets duplicate content through. So the algorithm should generate a risk assessment number, with 0 being no duplicate risk, 5 being a possible duplicate and 10 being a duplicate. Anything above 5 means there is a good possibility that the content is a duplicate. In this case the content could be flagged and linked to the articles that are possible duplicates, and a human could decide whether to delete or allow it. As I said before, I'm storing keywords for the whole article; however, I wonder if I could do the same on a paragraph basis. This would also mean further separating my data in the DB, but it would also make it easier to detect case (2) from my initial list. I'm thinking of a weighted average between the statistics, but in what order and what would the consequences be...
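    To make the scoring idea concrete, here is a minimal sketch (the weights and the ArticleStats shape are invented for illustration; this is a plain weighted average, not a tuned algorithm) that combines keyword overlap with the closeness of the stored statistics into the 0-10 risk number described above:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical container for the per-article statistics and keywords described above.
        public class ArticleStats
        {
            public double TextLength, WordCount, SentenceCount,
                          WordsPerSentence, GunningFog, AutoReadIndex;
            public HashSet<string> Keywords =
                new HashSet<string>(StringComparer.OrdinalIgnoreCase);
        }

        public static class DuplicateScorer
        {
            // Returns a risk score from 0 (no duplicate risk) to 10 (almost certainly a duplicate).
            public static double Score(ArticleStats a, ArticleStats b)
            {
                // Jaccard overlap of the keyword sets: 1.0 means identical keyword profiles.
                double union = a.Keywords.Union(b.Keywords).Count();
                double overlap = union == 0
                    ? 0
                    : a.Keywords.Intersect(b.Keywords).Count() / union;

                // Average relative closeness of the numeric statistics (1.0 = identical).
                double[] closeness =
                {
                    Closeness(a.TextLength, b.TextLength),
                    Closeness(a.WordCount, b.WordCount),
                    Closeness(a.SentenceCount, b.SentenceCount),
                    Closeness(a.WordsPerSentence, b.WordsPerSentence),
                    Closeness(a.GunningFog, b.GunningFog),
                    Closeness(a.AutoReadIndex, b.AutoReadIndex)
                };
                double statCloseness = closeness.Average();

                // Weight keywords more heavily than raw statistics; both weights are guesses
                // that would need tuning against real flagged/duplicate examples.
                return 10.0 * (0.7 * overlap + 0.3 * statCloseness);
            }

            private static double Closeness(double x, double y)
            {
                double max = Math.Max(Math.Abs(x), Math.Abs(y));
                return max == 0 ? 1.0 : 1.0 - Math.Abs(x - y) / max;
            }
        }

    Running the same comparison per paragraph, as suggested above, would use the same function with per-paragraph keyword sets, which is where partially copied content would start to stand out.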

    Read the article

  • Issue 56 - Super Stylesheets Skinning in DotNetNuke 5

    May 2010
    Welcome to Issue 56 of DNN Creative Magazine. In this issue we show you how to use the powerful new Super Stylesheets skinning feature in DotNetNuke 5. Super Stylesheets are ideal for both beginner and experienced skin designers; they provide skin layouts using CSS. The advantage of Super Stylesheets is that you can easily create a skin layout which works in all browsers without the need to learn complex CSS techniques. They are also very quick to build, and you can change a skin layout in a matter of minutes rather than hours. We show you how to build a skin from the very beginning using Super Stylesheets, how to create various skin layouts as well as multi-layouts, how to style the skin, how to add tokens such as the logo, menu, login links etc., and walk you through how to create a fully working skin from scratch. Following this we continue the Open Web Studio tutorials; this month we demonstrate how to create an installable DotNetNuke PA module using OWS. This is an essential technique which allows you to package up the OWS applications that you have created and build them into an installable zip package. The zip file is then installable as a standard DotNetNuke module, which means you can easily install your OWS applications on other DotNetNuke installations by simply installing them as a standard DotNetNuke module. To finish, we have part six of the "How to Build a News Application with DotNetMushroom Rapid Application Developer (RAD)" article, where we demonstrate how to create a News Carousel using RAD, JQuery and the JCarousel plugin. This issue comes complete with 15 videos:
    Skinning: Super Stylesheets Skinning in DotNetNuke 5 - DNN Layouts (12 videos - 98mins)
    Module Development Series: How to Create an Installable DotNetNuke PA Module Using OWS (3 videos - 23mins)
    How to Implement a News Carousel Using DotNetMushroom RAD and JQuery
    View issue 56 to download all of the videos in one zip file.
    DNN Creative Magazine for DotNetNuke Web Designers: covering DotNetNuke module video reviews, video tutorials, mp3 interviews, resources and web design tips for working with DotNetNuke. In 56 issues we have created 578 videos!

    Read the article

  • Was I wrong about JavaScript?

    - by jboyer
    Yes, I was. Recently, I've taken a good hard look at JavaScript. I've used it before but mostly in the capacity of web design. Using JQuery to make your web page do cool stuff is different than really creating a JavaScript application using all of the language constructs. What I'm finding as I use it more is that I may have been wrong about my assumptions about it. Let me explain.
    I enjoyed doing cool stuff with JQuery but the limited experience with JavaScript as a language coupled with the bad things that I heard about it led me to not have any real interest in it. However, JavaScript is ubiquitous on the web and if I want to do any web development, which I do, I need to learn it. So here I am, diving deep into the language with the help of the JavaScript Fundamentals training course at Pluralsight (great training for a low price) and the JavaScript: The Good Parts book by Douglas Crockford.
    Now, there are certainly parts of JavaScript that are bad. I think these are well known by any developer that uses it. The parts that I feel are especially egregious are the following:
    The global object
    null vs. undefined
    truthy and falsy
    limited (nearly nonexistent) scoping
    '==' and '===' (I just don't get the reason for coercion)
    However, what I am finding hiding under the covers of the bad things is a good language. I am finding that I am legitimately enjoying JavaScript. This I was not expecting. I'm not going to go into a huge dissertation on what I like about it, but some things include:
    Object literal notation
    dynamic typing
    functional style (JavaScript: The Good Parts describes it as LISP in C clothing)
    JSON (better than XML)
    There are parts of JavaScript that seem strange to OOP developers like myself. However, just because it is different or seems strange does not mean it is bad. Some differences are quite interesting and useful. I feel that it is important for developers to challenge their assumptions and also to be able to admit when they are wrong on a topic. Many different situations can arise that lead to this, such as choosing the wrong technology for a problem's solution, misunderstanding the requirements, etc. I decided to challenge my assumptions about JavaScript instead of moving straight into CoffeeScript or Dart. After exploring it, I find that I am beginning to enjoy it the more I use it. As long as there are those like Crockford to help guide me in the right way to code in JavaScript, I can create elegant and efficient solutions to problems and add another 'arrow' to the 'quiver', so to speak. I do still intend to learn CoffeeScript to see what the hub-bub is about, but now I no longer have to be afraid of JavaScript as a legitimate programming language.
    Has something similar ever happened to you? Tell me about it in the comments below.

    Read the article

  • C#, Delegates and LINQ

    - by JustinGreenwood
    One of the topics many junior programmers struggle with is delegates. And today, anonymous delegates and lambda expressions are profuse in .NET APIs. To help some VB programmers adapt to C# and the many equivalent flavors of delegates, I walked through some simple samples to show them the different flavors of delegates.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        namespace DelegateExample
        {
            class Program
            {
                public delegate string ProcessStringDelegate(string data);

                public static string ReverseStringStaticMethod(string data)
                {
                    return new String(data.Reverse().ToArray());
                }

                static void Main(string[] args)
                {
                    var stringDelegates = new List<ProcessStringDelegate>
                    {
                        //==========================================================
                        // Declare a new delegate instance and pass the name of the method in
                        new ProcessStringDelegate(ReverseStringStaticMethod),

                        //==========================================================
                        // A shortcut is to just pass the name of the method in
                        ReverseStringStaticMethod,

                        //==========================================================
                        // You can also create an anonymous delegate
                        delegate (string inputString) // Scramble
                        {
                            var outString = inputString;
                            if (!string.IsNullOrWhiteSpace(inputString))
                            {
                                var rand = new Random();
                                var chs = inputString.ToCharArray();
                                for (int i = 0; i < inputString.Length * 3; i++)
                                {
                                    int x = rand.Next(chs.Length), y = rand.Next(chs.Length);
                                    char c = chs[x];
                                    chs[x] = chs[y];
                                    chs[y] = c;
                                }
                                outString = new string(chs);
                            }
                            return outString;
                        },

                        //==========================================================
                        // Yet another syntax would be the lambda expression syntax
                        inputString =>
                        {
                            // ROT13
                            var array = inputString.ToCharArray();
                            for (int i = 0; i < array.Length; i++)
                            {
                                int n = (int)array[i];
                                n += (n >= 'a' && n <= 'z') ? ((n > 'm') ? 13 : -13) :
                                     ((n >= 'A' && n <= 'Z') ? ((n > 'M') ? 13 : -13) : 0);
                                array[i] = (char)n;
                            }
                            return new string(array);
                        }
                        //==========================================================
                    };

                    // Display the results of the delegate calls
                    var stringToTransform = "Welcome to the jungle!";
                    System.Console.ForegroundColor = ConsoleColor.Cyan;
                    System.Console.Write("String to Process: ");
                    System.Console.ForegroundColor = ConsoleColor.Yellow;
                    System.Console.WriteLine(stringToTransform);

                    stringDelegates.ForEach(delegatePointer =>
                    {
                        System.Console.WriteLine();
                        System.Console.ForegroundColor = ConsoleColor.Cyan;
                        System.Console.Write("Delegate Method Name: ");
                        System.Console.ForegroundColor = ConsoleColor.Magenta;
                        System.Console.WriteLine(delegatePointer.Method.Name);
                        System.Console.ForegroundColor = ConsoleColor.Cyan;
                        System.Console.Write("Delegate Result: ");
                        System.Console.ForegroundColor = ConsoleColor.White;
                        System.Console.WriteLine(delegatePointer(stringToTransform));
                    });

                    System.Console.ReadKey();
                }
            }
        }

    The output of the program is below:

        String to Process: Welcome to the jungle!
        Delegate Method Name: ReverseStringStaticMethod
        Delegate Result: !elgnuj eht ot emocleW
        Delegate Method Name: ReverseStringStaticMethod
        Delegate Result: !elgnuj eht ot emocleW
        Delegate Method Name: b__1
        Delegate Result: cg ljotWotem!le une eh
        Delegate Method Name: b__2
        Delegate Result: dX_V|`X ?| ?[X ]?{Z_X!

    Read the article

  • CES 2011–Microsoft Keynote Impressions

    - by guybarrette
    Microsoft has been kicking off the CES for a number of years by doing a keynote the evening of the event's first day. This year, SteveB talked about Xbox, Kinect, new Windows 7 laptops, Surface 2 and Windows vNext running on the ARM architecture. Some of the designs of the new laptops shown are quite amazing. This one has a dual screen with no physical keyboard. The image is split between both screens. A software keyboard appears when you place your 10 fingers on the lower screen. This one from Samsung has a sliding keyboard, somewhat like numerous cell phones have. What I found most amazing is that Intel was able to miniaturize a full Intel architecture (CPU, motherboard, memory) into a tiny form factor. Imagine having the power of a full PC running .NET apps in a Zune/iPod form factor! They also showed V2 of the Surface device. This one is called the Samsung SUR40 for Surface PC. It's much sleeker and it will likely lose the BAT (Big Ass Table) moniker. More info here. SteveB announced that Windows vNext will run on ARM chips. I'm intrigued by this announcement (you can read about it here) and I have many questions:
    -In the past ARM devices were slow, so what now makes the ARM architecture able to run Windows?
    -ARM is 32-bit only, I think.
    -Does this mean that Intel wasn't able to provide such a lightweight architecture, or simply that they weren't interested?
    -From what I understand, apps would need to be recompiled for ARM. Will we need to do that from an ARM PC, or could it be done natively on Intel or on an Intel PC running in an ARM VM? VS 2012? Ahhhh, smells like a cool PDC is coming up.
    Clearly it looks like PCs have enough power for most of us right now and that the race is now about miniaturization, power consumption and battery life. You can watch the Microsoft CES 2011 keynote here.

    Read the article

  • 3 Reasons You Need To Know Something About Every Technology

    - by Tim Murphy
    I make my living as a consultant and a general technologist. I credit my success to the fact that I have never been afraid to pick up any product, language or platform needed to get the job done. While Microsoft technologies are my mainstay, I have done work on mainframe and UNIX platforms and have worked with a wide variety of database engines. Each one has its use, and most times it is less expensive to find a way to communicate with an existing system than to replace it. So what are the main benefits of expending the effort to learn a new technology?
    New ways to solve problems
    Accelerate development
    Advise clients and get new business opportunities
    By new technology I mean ones that you haven't had experience with before. They don't have to be the one that just came out yesterday. As they say, those who do not learn from history are bound to repeat it. If you can learn something from an older technology it can be just as valuable as the shiny new one. Either way, when you add another tool to your kit you get a new view on each problem you face. This makes it easier to create a sound solution.
    The next thing you gain from working with different products and techniques is the ability to solve problems more efficiently. Many times, if you are working with a new language, you will find that there are specific design patterns that are commonly used with it. These can usually be applied with most languages. You just need to be exposed to them.
    The last point is about helping your clients and helping yourself. If you can get in on technologies early, you will have an advantage over your competition in the market. You will also be able to honestly advise your client on why they should or should not go with a new product. Being able to compare products and their features is always an ability that stakeholders appreciate.
    You don't need to learn every detail of a product. Learn enough to function and get an idea of how to use the technology. Keep eating those technology Wheaties and you will be ready to go the distance in any project.

    Read the article

  • Achieving decoupling in Model classes

    - by Guven
    I am trying to test-drive (or at least write unit tests for) my Model classes, but I noticed that my classes end up being too coupled. Since I can't break this coupling, writing unit tests is becoming harder and harder. To be more specific:
    Model Classes: These are the classes that hold the data in my application. They pretty much resemble POJOs (plain old Java objects), but they also have some methods. The application is not too big, so I have around 15 model classes.
    Coupling: Just to give an example, think of a simple case of Order Header - Order Item. The header knows the item and the item knows the header (it needs some information from the header to perform certain operations). Then, let's say there is a relationship between Order Item and Item Report. The item report needs the item as well. At this point, imagine writing tests for Item Report; you need to have an Order Header to carry out the tests. This is a simple case with 3 classes; things get more complicated with more classes. I can come up with decoupled classes when I design algorithms, persistence layers, UI interactions, etc., but with model classes I can't think of a way to separate them. They currently sit as one big chunk of classes that depend on each other.
    Here are some workarounds that I can think of:
    Data Generators: I have a package that generates sample data for my model classes. For example, the OrderHeaderGenerator class creates OrderHeaders with some basic data in them. I use the OrderHeaderGenerator from my ItemReport unit tests so that I get an instance of the OrderHeader class. The problem is that these generators get complicated pretty fast, and then I also need to test the generators themselves, defeating the purpose a little bit.
    Interfaces instead of dependencies: I can come up with interfaces to get rid of the hard dependencies. For example, the OrderItem class would depend on the IOrderHeader interface. So, in my unit tests, I can easily mock the behaviour of an OrderHeader with a FakeOrderHeader class that implements the IOrderHeader interface (see the sketch below). The problem with this approach is the complexity that the Model classes would end up having.
    Would you have other ideas on how to break this coupling in the model classes? Or how to make it easier to unit-test the model classes?
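    To illustrate the second workaround, here is a minimal C# sketch (the interface members and the discount calculation are invented for the example, not taken from the actual code base) showing OrderItem depending on IOrderHeader so a test can hand it a FakeOrderHeader:

        // Hypothetical slice of the model: OrderItem only needs a couple of values
        // from its header, so it depends on a narrow interface rather than the class.
        public interface IOrderHeader
        {
            string Currency { get; }
            decimal DiscountRate { get; }
        }

        public class OrderHeader : IOrderHeader
        {
            public string Currency { get; set; }
            public decimal DiscountRate { get; set; }
            // ... the rest of the real header lives here ...
        }

        public class OrderItem
        {
            private readonly IOrderHeader header;

            public OrderItem(IOrderHeader header) { this.header = header; }

            public decimal UnitPrice { get; set; }
            public int Quantity { get; set; }

            // One of the "certain operations" that needs information from the header.
            public decimal NetTotal()
            {
                return UnitPrice * Quantity * (1 - header.DiscountRate);
            }
        }

        // In a unit test, a hand-rolled fake stands in for the header, so OrderItem
        // (and anything built on it) can be tested without building a real OrderHeader.
        public class FakeOrderHeader : IOrderHeader
        {
            public string Currency { get { return "USD"; } }
            public decimal DiscountRate { get { return 0.10m; } }
        }

    The trade-off, as noted above, is the extra interfaces living in the model layer; once that boilerplate grows, a mocking library can generate the fakes instead of writing them by hand.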

    Read the article

  • Should EICAR be updated to test the revision of Antivirus system?

    - by makerofthings7
    I'm posting this here since programmers write viruses and AV software. They also have the best knowledge of heuristics and how AV systems work (cloaking etc.). The EICAR test file was used to functionally test an antivirus system. As it stands today, almost every AV system will flag EICAR as being a "test" virus. For more information on this historic test virus please click here. Currently the EICAR test file is only good for testing the presence of an AV solution, but it doesn't check whether the engine or DAT files are up to date. In other words, why do a functional test of a system that could have definition files that are more than 10 years old? With the increase of zero-day threats, it doesn't make much sense to functionally test your system using EICAR. That being said, I think EICAR needs to be updated/modified to be an effective test that works in conjunction with an AV management solution. This question is about real-world testing, without using live viruses... which is the intent of the original EICAR. With that in mind, I'm proposing a new EICAR file format with an appended XML blob that will conditionally cause the antivirus engine to respond:

        X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-EXTENDED-ANTIVIRUS-TEST-FILE!$H+H*
        <?xml version="1.0"?>
        <engine-valid-from>2010-1-1Z</engine-valid-from>
        <signature-valid-from>2010-1-1Z</signature-valid-from>
        <authkey>MyTestKeyHere</authkey>

    In this sample, the antivirus engine would only alert on the EICAR file if both the signature and engine files are equal to or newer than their valid-from dates. Also, there is a passcode that will restrict the usage of EICAR to the system administrator. If you have a background in Test Driven Development (TDD) for software, you may see that all I'm doing is applying the principles of TDD to my infrastructure. Based on your experience and contacts, how can I make this idea happen?
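    Purely to illustrate how a scanner might honour the proposed extension (this is a sketch of the proposal above, not how any real AV engine behaves, and the parsing and date handling are assumptions), the alert decision could look like:

        using System;
        using System.Xml.Linq;

        public static class ExtendedEicarCheck
        {
            // Decides whether to alert on the extended test file. engineDate and
            // signatureDate are the build dates of the installed engine and DAT files;
            // adminPasscode is the key the administrator configured in the AV console.
            public static bool ShouldAlert(string xmlBlob, DateTime engineDate,
                                           DateTime signatureDate, string adminPasscode)
            {
                // The proposed blob starts with an XML declaration but has no single
                // root element, so strip the declaration and wrap it before parsing.
                string body = xmlBlob.TrimStart();
                if (body.StartsWith("<?xml"))
                    body = body.Substring(body.IndexOf("?>") + 2);
                XElement root = XElement.Parse("<eicar>" + body + "</eicar>");

                DateTime engineValidFrom = ParseDate((string)root.Element("engine-valid-from"));
                DateTime signatureValidFrom = ParseDate((string)root.Element("signature-valid-from"));
                string authKey = (string)root.Element("authkey");

                // Alert only when both the engine and the signatures are at least as new
                // as the dates embedded in the test file, and the passcode matches.
                return engineDate >= engineValidFrom
                    && signatureDate >= signatureValidFrom
                    && authKey == adminPasscode;
            }

            private static DateTime ParseDate(string value)
            {
                // The sample uses values like "2010-1-1Z"; drop the trailing Z before parsing.
                return DateTime.Parse(value.TrimEnd('Z'));
            }
        }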

    Read the article

  • Creating Corporate Windows Phone Applications

    - by Tim Murphy
    Most developers write Windows Phone applications for their own gratification and their own wallets. While most of the time I would put myself in the same camp, I am also a consultant. This means that I have corporate clients who want corporate solutions. I recently got a request for a system rebuild that includes a Windows Phone component. This brought up the question of which aspects are important to consider when building for this situation. Let's break it down into the points that are important to a company using a mobile application. The company wants to make sure that its proprietary software is safe from use by unauthorized users. It also wants to make sure that the data is secure on the device. The first point is a challenge. There is no such thing as true private distribution in the Windows Phone ecosystem at this time. What is available is the ability to specify your application for targeted distribution. Even with targeted distribution you can't ensure that only individuals within your organization will be able to load your application. Because of this I am taking two additional steps. The first is to register the phone's DeviceUniqueId within your system. Add a system sign-in and that should cover access to your application. The second half of the problem is securing the data on the phone. This is where the ProtectedData API within the System.Security.Cryptography namespace comes in. It allows you to encrypt your data before pushing it to isolated storage on the device. With the announcement of Windows Phone 8 coming this fall, many of these points will have different solutions. Private signing and distribution of applications will be available. We will also have native access to BitLocker. When you combine these capabilities, enterprise application development for Windows Phone will be much simpler. Until then, work with the above suggestions to develop your enterprise solutions. del.icio.us Tags: Windows Phone 7,Windows Phone,Corporate Deployment,Software Design,Mango,Targeted Applications,ProtectedData API,Windows Phone 8
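
    A minimal sketch of the data-protection half, assuming the two-argument Protect/Unprotect overloads that Windows Phone 7.x exposes (the desktop framework adds a DataProtectionScope parameter); the entropy value and file naming are illustrative only:

    using System.IO;
    using System.IO.IsolatedStorage;
    using System.Security.Cryptography;
    using System.Text;

    public static class SecureStore
    {
        // Optional entropy passed to ProtectedData; the value here is purely illustrative.
        private static readonly byte[] Entropy = Encoding.UTF8.GetBytes("corp-app-entropy");

        public static void Save(string name, string value)
        {
            // Encrypt before the data ever touches isolated storage.
            byte[] protectedBytes = ProtectedData.Protect(Encoding.UTF8.GetBytes(value), Entropy);

            using (var store = IsolatedStorageFile.GetUserStoreForApplication())
            using (var stream = store.CreateFile(name + ".dat"))
            {
                stream.Write(protectedBytes, 0, protectedBytes.Length);
            }
        }

        public static string Load(string name)
        {
            using (var store = IsolatedStorageFile.GetUserStoreForApplication())
            using (var stream = store.OpenFile(name + ".dat", FileMode.Open))
            {
                var buffer = new byte[stream.Length];
                stream.Read(buffer, 0, buffer.Length);
                byte[] clearBytes = ProtectedData.Unprotect(buffer, Entropy);
                return Encoding.UTF8.GetString(clearBytes, 0, clearBytes.Length);
            }
        }
    }

    Credentials or cached corporate data can then be round-tripped through Save and Load rather than written to isolated storage in the clear.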

    Read the article

  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture and has only really come together as a concrete approach in the last 15 years or so, although the concepts involved have been around for longer. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies including Oracle and IBM. SCA, as used in SOA Suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles like the separation of application and business logic are maintained. Orchestration or Integration? A common thing to see with many people who are beginning either to build a new SOA-based infrastructure or to move an old system to be service oriented is confusion about the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however it's important to remember that, although a hammer can be used to drive a screw into wood, that doesn't mean it's the best way to do it. Service Integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation – a simple product ordering system, for example, might integrate a stock checking service and a payment processing service. Process Orchestration, however, is generally a higher-level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might take the earlier example of a product ordering service and couple it with a business rules service and a human task to handle edge cases. A good (but not exhaustive) rule of thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or have to wait pending manual intervention. BPEL vs BPMN For some, with pre-existing SOA or business process projects, this decision is effectively already made. For those embarking on new projects with Oracle SOA software it's certainly an important consideration since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA Suite has no BPMN engine, whereas BPM Suite has both a BPMN and a BPEL engine. SOA Suite has the ESB component "Mediator", whereas BPM Suite has none. Decisions must be made, therefore, on whether just one or both process modelling languages are to be used. The wrong decision could be costly further down the line. Design for performance: Read the complete article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: C2B2,SOA best practice,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Sql Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient. I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have as well as to see if I've missed anything. So to that end, I give you my table: CREATE TABLE [dbo].[lq_ActivityLog]( [ID] [bigint] IDENTITY(1,1) NOT NULL, [PlacementID] [int] NOT NULL, [CreativeID] [int] NOT NULL, [PublisherID] [int] NOT NULL, [CountryCode] [nvarchar](10) NOT NULL, [RequestedZoneID] [int] NOT NULL, [AboveFold] [int] NOT NULL, [Period] [datetime] NOT NULL, [Clicks] [int] NOT NULL, [Impressions] [int] NOT NULL, CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED ( [Period] ASC, [PlacementID] ASC, [CreativeID] ASC, [PublisherID] ASC, [RequestedZoneID] ASC, [AboveFold] ASC, [CountryCode] ASC) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]) ON [PRIMARY] And now some assumptions and additional information: The table currently has 200,000,000 rows. PlacementID ranges from 1 to 5000 and should support at least 50,000. CreativeID ranges from 1 to 5000 and should support at least 50,000. PublisherID ranges from 1 to 500 and should support at least 50,000. CountryCode is a 2-character ISO standard (e.g. US) and there is a country table with an integer ID already; there are < 300 rows. RequestedZoneID ranges from 1 to 100 and should support at least 50,000. AboveFold has values of -1, 0, or 1 only. Period is a date (no time). Clicks range from 0 to 5000. Impressions range from 0 to 5,000,000. The table is currently write-mostly. Its primary purpose is to log advertising activity as quickly as possible. Nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables. Here's the current information on the database table's size: Design Goals This table has been in use for about 5 years and has performed very well during that time. The only complaints we have are that it is quite large and also that there are occasionally timeouts for queries that reference it, particularly when batch jobs are pulling data from it. Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations. Refactor There are, I suggest to you, some glaringly obvious optimizations that can be made to this table. And I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well. I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions. Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

< Previous Page | 593 594 595 596 597 598 599 600 601 602 603 604  | Next Page >