Search Results

Search found 19136 results on 766 pages for 'library understanding'.


  • SQL Rally Pre-Con: Data Warehouse Modeling – Making the Right Choices

    - by Davide Mauri
    As you may have already learned from my old post or Adam's or Kalen's posts, there will be two SQL Rally events in Northern Europe. At the Stockholm SQL Rally, with my friend Thomas Kejser, I'll be delivering a pre-con on Data Warehouse Modeling:

    Data warehouses play a central role in any BI solution. It's the back end upon which everything in years to come will be created. For this reason, it must be rock solid and yet flexible at the same time. To develop such a data warehouse, you must have a clear idea of its architecture, a thorough understanding of the concepts of Measures and Dimensions, and a proven, engineered way to build it so that quality and stability can go hand in hand with cost reduction and scalability. In this workshop, Thomas Kejser and Davide Mauri will share all the information they have learned since they started working with data warehouses, giving you the guidance and tips you need to start your BI project in the best way possible: avoiding errors, making implementation effective and efficient, paving the way for a winning Agile approach, and helping you define how your team should work so that your BI solution will stand the test of time.

    You'll learn:
    - Data warehouse architecture and justification
    - Agile methodology
    - Dimensional modeling, including Kimball vs. Inmon, SCD1/SCD2/SCD3, Junk and Degenerate Dimensions, and Huge Dimensions
    - Best practices, naming conventions, and lessons learned
    - Loading the data warehouse, including loading Dimensions and loading Facts (Full Load, Incremental Load, Partitioned Load)
    - Data warehouses and Big Data (Hadoop)
    - Unit testing
    - Tracking historical changes and managing large sizes

    With all the Self-Service BI hype, the data warehouse is becoming more and more central every day: if everyone is able to analyze data using self-service tools, they had better be able to rely on correct, uniform, and coherent data. Fifty people have already registered for the workshop, and seats are limited, so don't miss this unique opportunity to attend a workshop that distills years and years of experience! http://www.sqlpass.org/sqlrally/2013/nordic/Agenda/PreconferenceSeminars.aspx

    See you there!

    Read the article

  • Azure eBook Update #1 – 16 authors so far!

    - by Eric Nelson
    I just wanted to share with folks where we are up to with the Windows Azure eBook (check out the original post for full details). I have had lots of great submissions from folks with some awesome stuff to share on Azure. Currently we have 16 authors and 25 proposed articles. There are still a couple of days left to submit your proposal if you would like to get involved (see the original post), and there are some topic suggestions below for which we don't currently have authors. It is official – I'm excited! :-)

        Article | Area | Accepted
        Wikipedia Explorer: a case study of how we did it and why | CaseStudy | Optional
        Patterns for the Windows Azure Platform (picking up 1 or 2 patterns that seem to be evolving) | Architecture | Optional
        Azure and cost-oriented architecture | Architecture | Yes
        Code walkthrough of a comprehensive application submitted to the newCloudApp contest | CaseStudy | Yes
        Principles of highly scalable apps on Azure | Compute | Optional
        Auto-Scaling Azure | Compute | Yes
        Implementing a distributed cache using memcached with worker roles | Interop | Yes
        Building a content-based router service to direct requests to internal HTTP endpoints | Compute | Optional
        How to debug an Azure app with a custom TraceListener & the AppFabric Service Bus | AppFabric | Yes
        How to host Java apps in Azure | Interop | Yes
        Bing Maps Tile Servers using Azure Blob Storage | Interop | Yes
        Tricks for storing time and date fields in Table Storage | Storage | Yes
        Service Runtime in Windows Azure | Compute | Yes
        Azure Drive | Storage | Optional
        Queries in Azure Table | Storage | Optional
        Getting RubyOnRails running on Azure | Interop | Yes
        Consuming Azure services within Windows Phone | Interop | Yes
        De-risking Your First Azure Project | Architecture | Yes
        Designing for failure | Architecture | Optional
        Connecting to SQL Azure In x Minutes | SQLAzure | Yes
        Using Azure Table Service as a NoSQL store via the REST API | Storage | Yes
        Azure Table Service REST API | Storage | Optional
        Threading, Scalability and Reliability in the Cloud | Compute | Yes
        Azure Diagnostics | Compute | Yes
        5 steps to getting started with Windows Azure | Introduction | Yes
        The best tools for working with Windows Azure | Tools | Author Needed
        Understanding how SQL Azure works | SQLAzure | Author Needed
        Getting started with AppFabric Control Services | AppFabric | Author Needed
        Using the Microsoft Sync Framework with SQL Azure | SQLAzure | Author Needed
        Dallas - just a TV show or something more? | Dallas | Author Needed
        Comparing Azure to other cloud offerings | Interop | Author Needed
        Hybrid solutions using Azure and on-premise | Interop | Author Needed

    Read the article

  • Constant game speed independent of variable FPS in OpenGL with GLUT?

    - by Nazgulled
    I've been reading Koen Witters' detailed article about different game loop solutions, but I'm having some problems implementing the last one with GLUT, which is the recommended one. After reading a couple of articles, tutorials, and code from other people on how to achieve a constant game speed, I think that what I currently have implemented (I'll post the code below) is what Koen Witters called Game Speed dependent on Variable FPS, the second one in his article.

    First, through my searching experience, there are a couple of people who probably have the knowledge to help out on this but don't know what GLUT is, so I'm going to try and explain (feel free to correct me) the relevant functions of this OpenGL toolkit for my problem. Skip this section if you know what GLUT is and how to play with it.

    GLUT Toolkit: GLUT is an OpenGL toolkit that helps with common tasks in OpenGL.
    - The glutDisplayFunc(renderScene) takes a pointer to a renderScene() function callback, which will be responsible for rendering everything. The renderScene() function will only be called once after the callback registration.
    - The glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0) takes the number of milliseconds to pass before calling the callback processAnimationTimer(). The last argument is just a value to pass to the timer callback. The processAnimationTimer() will not be called every TIMER_MILLISECONDS, just once.
    - The glutPostRedisplay() function requests that GLUT render a new frame, so we need to call it every time we change something in the scene.
    - The glutIdleFunc(renderScene) could be used to register a callback to renderScene() (this does not make glutDisplayFunc() irrelevant), but this function should be avoided because the idle callback is continuously called when events are not being received, increasing the CPU load.
    - The glutGet(GLUT_ELAPSED_TIME) function returns the number of milliseconds since glutInit was called (or the first call to glutGet(GLUT_ELAPSED_TIME)). That's the timer we have with GLUT. I know there are better alternatives for high-resolution timers, but let's stick with this one for now.

    I think this is enough information on how GLUT renders frames, so people who didn't know about it can also pitch in on this question and try to help if they feel like it.

    Current Implementation: Now, I'm not sure I have correctly implemented the second solution proposed by Koen, Game Speed dependent on Variable FPS. The relevant code for that goes like this:

        #define TICKS_PER_SECOND 30
        #define MOVEMENT_SPEED 2.0f

        const int TIMER_MILLISECONDS = 1000 / TICKS_PER_SECOND;

        int previousTime;
        int currentTime;
        int elapsedTime;

        void renderScene(void) {
            (...)
            // Setup the camera position and looking point
            SceneCamera.LookAt();
            // Do all drawing below...
            (...)
        }

        void processAnimationTimer(int value) {
            // setup the timer to be called again
            glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);
            // Get the time when the previous frame was rendered
            previousTime = currentTime;
            // Get the current time (in milliseconds) and calculate the elapsed time
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            elapsedTime = currentTime - previousTime;
            /* Multiply the camera direction vector by constant speed, then by the
               elapsed time (in seconds), and then move the camera */
            SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));
            // Request a new frame (this will call my renderScene() once)
            glutPostRedisplay();
        }

        void main(int argc, char **argv) {
            glutInit(&argc, argv);
            (...)
            glutDisplayFunc(renderScene);
            (...)
            // Setup the timer to be called a first time
            glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0);
            // Read the current time since glutInit was called
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            glutMainLoop();
        }

    This implementation doesn't feel right. It works in the sense that it keeps the game speed constant regardless of the FPS, so that moving from point A to point B takes the same time no matter how high or low the framerate is. However, I believe I'm limiting the game framerate with this approach. Each frame will only be rendered when the timer callback is called, which means the framerate will be roughly TICKS_PER_SECOND frames per second. This doesn't feel right; you shouldn't limit your powerful hardware. It's my understanding, though, that I still need to calculate elapsedTime: just because I'm telling GLUT to call the timer callback every TIMER_MILLISECONDS, it doesn't mean it will always do that on time.

    I'm not sure how I can fix this, and to be completely honest, I have no idea what the game loop in GLUT is, you know, the while( game_is_running ) loop in Koen's article. It's my understanding that GLUT is event-driven and that the game loop starts when I call glutMainLoop() (which never returns), yes? I thought I could register an idle callback with glutIdleFunc() and use that as a replacement for glutTimerFunc(), only rendering when necessary (instead of all the time as usual), but when I tested this with an empty callback (like void gameLoop() {}) that was basically doing nothing (only a black screen), the CPU spiked to 25% and remained there until I killed the game, after which it went back to normal. So I don't think that's the path to follow.

    Using glutTimerFunc() is definitely not a good approach for performing all movements/animations, as I'm limiting my game to a constant FPS, which is not cool. Or maybe I'm using it wrong and my implementation is not right? How exactly can I have a constant game speed with variable FPS? More exactly, how do I correctly implement Koen's Constant Game Speed with Maximum FPS solution (the fourth one in his article) with GLUT? Maybe this is not possible at all with GLUT? If not, what are my alternatives? What is the best approach to this problem (constant game speed) with GLUT?

    I originally posted this question on Stack Overflow before being pointed to this site. The following is a different approach I tried after creating the question on SO, so I'm posting it here too.

    Another Approach: I've been experimenting, and here's what I was able to achieve now. Instead of calculating the elapsed time in a timed function (which limits my game's framerate), I'm now doing it in renderScene(). Whenever changes to the scene happen, I call glutPostRedisplay() (i.e., camera moving, some object animating, etc...), which will make a call to renderScene(). I can use the elapsed time in this function to move my camera, for instance. My code has now turned into this:

        int previousTime;
        int currentTime;
        int elapsedTime;

        void renderScene(void) {
            (...)
            // Get the time when the previous frame was rendered
            previousTime = currentTime;
            // Get the current time (in milliseconds) and calculate the elapsed time
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            elapsedTime = currentTime - previousTime;
            /* Multiply the camera direction vector by constant speed, then by the
               elapsed time (in seconds), and then move the camera */
            SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f));
            // Setup the camera position and looking point
            SceneCamera.LookAt();
            // All drawing code goes inside this function
            drawCompleteScene();
            glutSwapBuffers();
            /* Redraw the frame ONLY if the user is moving the camera
               (similar code will be needed to redraw the frame for other events) */
            if(!IsTupleEmpty(cameraDirection)) {
                glutPostRedisplay();
            }
        }

        void main(int argc, char **argv) {
            glutInit(&argc, argv);
            (...)
            glutDisplayFunc(renderScene);
            (...)
            currentTime = glutGet(GLUT_ELAPSED_TIME);
            glutMainLoop();
        }

    In conclusion, it's working, or so it seems. If I don't move the camera, the CPU usage is low and nothing is being rendered (for testing purposes I only have a grid extending for 4000.0f, while zFar is set to 1000.0f). When I start moving the camera, the scene starts redrawing itself. If I keep pressing the move keys, the CPU usage will increase; this is normal behavior. It drops back when I stop moving. Unless I'm missing something, it seems like a good approach for now. I did find this interesting article on iDevGames, and this implementation is probably affected by the problem described in that article. What are your thoughts on that?

    Please note that I'm just doing this for fun; I have no intention of creating a game to distribute or anything like that, not in the near future at least. If I did, I would probably go with something besides GLUT. But since I'm using GLUT, and setting aside the problem described on iDevGames, do you think this latest implementation is sufficient for GLUT? The only real issue I can think of right now is that I'll need to keep calling glutPostRedisplay() every time the scene changes something, and keep calling it until there's nothing new to redraw. A little complexity added to the code for a better cause, I think. What do you think?
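    For reference, the loop Koen calls Constant Game Speed with Maximum FPS boils down to accumulating real elapsed time and draining it in fixed update steps, while rendering as often as the hardware allows and interpolating by the leftover fraction. Below is a minimal sketch of that structure (in C#; Update, Render, and the 30-ticks-per-second rate are hypothetical placeholders, not anything from Koen's code). Under GLUT, the equivalent logic would have to live inside an idle or timer callback rather than an explicit while loop, since glutMainLoop() owns the loop.

        using System.Diagnostics;

        class FixedStepLoop
        {
            const double MsPerUpdate = 1000.0 / 30.0; // 30 logic ticks per second
            static bool running = true;               // set false to quit

            static void Main()
            {
                var clock = Stopwatch.StartNew();
                double previous = clock.Elapsed.TotalMilliseconds;
                double lag = 0.0;                     // real time not yet consumed

                while (running)
                {
                    double current = clock.Elapsed.TotalMilliseconds;
                    lag += current - previous;
                    previous = current;

                    // Game state always advances in fixed steps, so speed is
                    // constant no matter how fast or slow frames come in.
                    while (lag >= MsPerUpdate)
                    {
                        Update();
                        lag -= MsPerUpdate;
                    }

                    // Render as often as the hardware allows; the fraction lets
                    // the renderer interpolate between the last two states.
                    Render(lag / MsPerUpdate);
                }
            }

            static void Update() { /* advance the simulation one tick */ }
            static void Render(double alpha) { /* draw, blending by alpha */ }
        }

    The inner while drains time in fixed MsPerUpdate steps, so simulation speed never depends on render speed, while fast hardware gets to draw smooth in-between frames instead of being capped at the tick rate.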

    Read the article

  • How do I work around sudo 'segmentation fault' on basic bash commands?

    - by sage
    I am sure the answers are out there, but alas, too many answers (here and elsewhere) to other questions are stopping me from finding them. I just encountered something substantially similar to what is described in the closed SO question "sudo : 'segmentation fault' Ubuntu maverick [closed]".

    My team is using Ubuntu 11.04 on VMware Workstation 8.0.4. We are doing development using C++, Xenomai, Qt, and Qt Creator. When we simulate our application on the virtual machine, we currently need to launch Qt Creator with sudo. My colleague mentioned today that he has been having issues where his workstation locks up and he needs to restart, and that occasionally all sudo bash commands return "segmentation fault".

    I just ran our application in simulation mode. I was running Qt Creator under sudo, and Qt Creator received the abort signal (if I recall correctly). Afterward, every command executed with sudo, from sudo qtcreator to sudo ls, resulted in the message "Segmentation fault". I clicked on the power widget to see if I could log out, but the system shut down straightaway without prompting.

    My understanding is that we run sudo because of a permissions issue with Xenomai and the VM as currently configured, but my colleague has a workaround for this. I expect that not running Qt Creator under sudo (something that has always made me nervous) will help contain this issue, but I find it troubling that this could happen and manifest as it does.

    Does anyone know what is happening? Any recommendations on how to work around this issue? This is happening often enough that I am trying to lobby for VM changes so we can run the process without sudo.

    Read the article

  • Top 10 Posts in 2010

    - by dwahlin
    Blogging’s a lot of fun and a great way to share what you’ve learned. It’s also a great way to learn, since comments people leave can help you see things in an entirely new way. Since we’ve now moved on to 2011 (Happy New Year!), I wanted to list the Top 10 posts from my blog during 2010, based on individual views. Thanks to everyone who follows my blog and adds comments from time to time. Here’s wishing everyone a great 2011!

    1. Reducing Code by Using jQuery Templates
    2. Integrating HTML into Silverlight Applications
    3. Silverlight is Dead, the Moon is Made of Cheese, and HTML 5 is Ready for Prime Time
    4. Understanding the Role of Commanding in Silverlight 4 Applications
    5. New Article – Getting Started with WCF RIA Services
    6. Simplify Your Code with LINQ
    7. My Favorite iPad Apps….So Far
    8. Final Release of Silverlight Tools for Visual Studio 2010 Released
    9. Handling WCF Service Paths in Silverlight 4 – Relative Path Support
    10. Tales from the Trenches – Building a Real-World Silverlight Line of Business Application

    Honorable mention: Getting Started with the MVVM Pattern in Silverlight Applications – posted in late 2009, but still one of the most popular posts.

    Read the article

  • .NET development on Macs

    - by Jeff
    I posted the “exciting” conclusion of my laptop trade-ins and issues on my personal blog. The links, in chronological order, are posted below. While those posts have all of the details about performance and software used, I wanted to comment on why I like using Macs in the first place.

    It started in 2006 when Apple released the first Intel-based Mac. As someone with a professional video past, I had been using Macs on and off since college (1995 graduate), so I was never terribly religious about any particular platform. I’m still not, but until recently, it was staggering how crappy PCs were. They were all plastic, disposable, commodity crap. I could never justify buying a PowerBook because I was a Microsoft stack guy. When Apple went Intel, they removed that barrier. They also didn’t screw around with selling to the low end (though the plastic MacBooks bordered on that), so even the base machines were pretty well equipped. Every Mac I’ve had, I’ve used for three years. Other than that first one, I’ve also sold each one, for quite a bit of money.

    Things have changed quite a bit, mostly within the last year. I’m actually relieved, because Apple needs competition at the high end. Other manufacturers are finally understanding the importance of industrial design. For me, I’ll stick with Macs for now, because I’m invested in OS X apps like Aperture and the Mac versions of Adobe products. As a Microsoft developer, it doesn’t even matter though… with Parallels, I Cmd-Tab and I’m in Windows.

    So after three and a half years with a wonderful 17” MBP and an upgraded SSD, it was time to get something lighter and smaller (traveling light is critical with a toddler), and I eventually ended up with a 13” MacBook Air, with the i7 and 8 GB upgrades, and I love it. At home I “dock” it to a Thunderbolt Display.

    - A new laptop
    - .NET development on a Retina MacBook Pro with Windows 8
    - Returning my MacBook Pro with Retina display
    - .NET development on a MacBook Air with Windows 8

    Read the article

  • Need help in determining what, if any, tools can be used to create a free Flash game

    - by ReaperOscuro
    Yes, I proudly (and sadly) declare that I am a complete nincompoop when it comes to Flash, and I have been fishing around the big wide web for information. The reason is that I have been contracted to create a game (or games) for a website, the usual Flash-based-games situation. I do not mean the things made by those game-generator websites; I mean small yet professional games. But the caveat, as always, is that impossible dream: it all needs to be done for free. The budget... well, imagine it as not there.

    The annoying part is that I am a game designer, yes, but with a ridiculously tight deadline I haven't got much time to re-learn everything (ah, the heady days of programming at uni) by the end of March, so I'd like to ask people who know their stuff rather than keep looking at a gazillion different things. This is my understanding: with the Flash SDK you can create a game, albeit you need to be pretty programming-savvy. FlashDevelop helps there, yet I am not entirely sure how. And even FD says to use Flash for the animation/graphics. Yes, it's undeniably powerful, but as I said, there is the unattainable demand of no money. The million dollar question: what, if any, tools can I use to create a free Flash game?

    Read the article

  • Should I give the answer to a failed interview coding exercise?

    - by GlenH7
    We had a senior level interview candidate fail a nuance of the FizzBuzz question*. I mean, really, utterly, completely fail the question - not even close. I even coached him through to thinking about using a loop and that 3 and 5 were really worth considering as special cases. He blew it. Just for QA purposes, I gave the exact same question to three teammates, gave them 5 minutes, and then came back to collect their pseudo-code. All of them nailed it, and none had seen the question before. Two asked what the trick was...

    On a different logic exercise, the candidate showed some understanding of some of the features available within the language he chose to use (C#). So it's not as if he had never written a line of code. But his logic still stunk. My question is whether or not I should have given him the answer to the logic questions. He knew he blew them, and acknowledged it later in the interview. On the other hand, he never asked for the answer or what I was expecting to see.

    I know coding exercises can be used to set candidates up for failure (again, see the second link from above). And I really tried to help him home in on answering the core of the question. But this was a senior level candidate, and FizzBuzz is, frankly, ridiculously easy even accounting for interview jitters. I felt like I should have shown him a way of solving the problem so that he could at least learn from the experience. But again, he didn't ask. What's the right way to handle that situation?

    *Okay, that's not the link to the actual FizzBuzz question, but it is a good P.SE discussion around FizzBuzz and links to the various aspects of it.
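    (For anyone unfamiliar with the exercise, here is one straightforward C# version, assuming the usual rules: multiples of 3 print Fizz, multiples of 5 print Buzz, multiples of both print FizzBuzz.)

        using System;

        class FizzBuzz
        {
            static void Main()
            {
                for (int i = 1; i <= 100; i++)
                {
                    if (i % 15 == 0) Console.WriteLine("FizzBuzz"); // divisible by both 3 and 5
                    else if (i % 3 == 0) Console.WriteLine("Fizz");
                    else if (i % 5 == 0) Console.WriteLine("Buzz");
                    else Console.WriteLine(i);
                }
            }
        }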

    Read the article

  • New Study Guide: "Oracle Solaris 11 System Administration"

    - by Harold Green
    A new helpful resource for Solaris 11 exam preparation has just been released. "Oracle Solaris 11 System Administration" by author and educator Bill Calkins covers effective installation and administration of an Oracle Solaris 11 system. In addition to being a valuable, comprehensive study guide, the book also serves as a complete reference for the everyday tasks of an Oracle Solaris System Administrator.

    This book can be a valuable addition to your preparation for the Oracle Solaris 11 Advanced System Administration (1Z1-822) certification exam. This exam, combined with the Oracle Certified Associate, Oracle Solaris 11 System Administrator (OCA) certification and a training requirement, will earn you the Oracle Certified Professional, Oracle Solaris 11 System Administrator (OCP) certification. This valuable credential is designed for Oracle Solaris System Administrators with a strong foundation in the Oracle Solaris 11 Operating System as well as a fundamental understanding of the UNIX operating system, commands, and utilities. The certification covers core elements such as configuring network interfaces and managing swap configurations, crash dumps, and core files.

    The 822 exam is currently in beta at the greatly discounted rate of $50 USD, but the beta period will soon be closing (likely the end of this month, June 2013), so take advantage of the opportunity to be one of the first to hold this new certification. Bill Calkins also recently posted some tips for taking Oracle Solaris 11 certification exams.

    Read the article

  • Gap in parallaxing background loop

    - by CinetiK
    The bug here is that my background offsets itself a bit from where it should draw, so I get a visible line. I have some trouble understanding why I get this bug when I set a speed that is different from 1, 2, 4, 8, 16, ...

    In the main class I set the speed depending on the player speed:

        bgSpeed = -(int)playerMoveSpeed.X / 10;

    And here's my background class:

        class ParallaxingBackground
        {
            Texture2D texture;
            Vector2[] positions;

            public int Speed { get; set; }

            public void Initialize(ContentManager content, String texturePath, int screenWidth, int speed)
            {
                texture = content.Load<Texture2D>(texturePath);
                this.Speed = speed;
                positions = new Vector2[screenWidth / texture.Width + 2];
                for (int i = 0; i < positions.Length; i++)
                {
                    positions[i] = new Vector2(i * texture.Width, 0);
                }
            }

            public void Update()
            {
                for (int i = 0; i < positions.Length; i++)
                {
                    positions[i].X += Speed;
                    if (Speed <= 0)
                    {
                        if (positions[i].X <= -texture.Width)
                        {
                            positions[i].X = texture.Width * (positions.Length - 1);
                        }
                    }
                    else
                    {
                        if (positions[i].X >= texture.Width * (positions.Length - 1))
                        {
                            positions[i].X = -texture.Width;
                        }
                    }
                }
            }

            public void Draw(SpriteBatch spriteBatch)
            {
                for (int i = 0; i < positions.Length; i++)
                {
                    spriteBatch.Draw(texture, positions[i], Color.White);
                }
            }
        }
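    A likely cause: when a tile crosses the wrap threshold mid-step, the code above snaps it to an exact coordinate and throws away the few pixels it overshot that frame, while every other tile keeps them, so a seam exactly as wide as the overshoot opens up. With speeds of 1, 2, 4, 8, or 16 the overshoot happens to be zero whenever texture.Width is a multiple of the speed, which is why those values appear to work. A hedged sketch of a fix, wrapping by the full strip width so the remainder is preserved:

        public void Update()
        {
            // Total width of the whole strip of tiles.
            int stripWidth = texture.Width * positions.Length;

            for (int i = 0; i < positions.Length; i++)
            {
                positions[i].X += Speed;

                // Wrap by the full strip width rather than snapping to a fixed
                // coordinate, so any overshoot past the edge is preserved and
                // the tiles stay exactly one texture-width apart.
                if (Speed <= 0 && positions[i].X <= -texture.Width)
                    positions[i].X += stripWidth;
                else if (Speed > 0 && positions[i].X >= texture.Width * (positions.Length - 1))
                    positions[i].X -= stripWidth;
            }
        }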

    Read the article

  • Subsumption architecture vs. perceptual control theory

    - by Yasir G.
    I'm new to the AI field and I have to research and compare 2 different architectures for a thesis I'm writing. Before you scream "homework thread", I've been reading on these 2 topics only to find that I'm confusing myself more. Let me first state briefly what I know so far.

    Subsumption is based on the fact that the targets of a system differ in sophistication, which requires them to be added as layers. Each layer can suppress (modify) the command of the layers below it, and there are inhibitors to stop signals from execution, let's say.

    PCT stresses the fact that there are nodes to handle environmental changes (negative feedback), so the inputs coming from an environment go through a comparator node and then an action is generated by that node. HPCT (Hierarchical PCT) is based on nesting these cycles inside each other, so a small cycle to avoid crashing would be nested in a more sophisticated cycle that targets a certain location, for example.

    My questions: am I getting this right? Am I missing any critical understanding of these 2 models? Also, any idea where I can find simplified explanations of each theory? (So far I've been struggling to understand the papers from Google Scholar :< ) /Y
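    To make the subsumption half concrete, here is a toy, fixed-priority sketch in C#: layers are ordered by sophistication, and each tick the highest layer that asserts itself suppresses the command of every layer below it. All the names are invented for illustration, and real subsumption wires suppression and inhibition into specific nodes of a network rather than one arbitration loop; this collapses all of that into the simplest possible arbiter.

        using System.Collections.Generic;

        class Inputs { public bool ObstacleAhead; public bool MapIncomplete; }

        interface ILayer
        {
            bool Asserts(Inputs i);     // does this layer want control this tick?
            string Command(Inputs i);   // the actuator command it would issue
        }

        class Explore : ILayer          // most sophisticated goal
        {
            public bool Asserts(Inputs i) => i.MapIncomplete && !i.ObstacleAhead;
            public string Command(Inputs i) => "head for unexplored area";
        }

        class AvoidObstacles : ILayer   // survival reflex
        {
            public bool Asserts(Inputs i) => i.ObstacleAhead;
            public string Command(Inputs i) => "turn away";
        }

        class Wander : ILayer           // default behavior
        {
            public bool Asserts(Inputs i) => true;
            public string Command(Inputs i) => "wander";
        }

        class Arbiter
        {
            // Higher (more sophisticated) layers come first and win ties.
            readonly List<ILayer> layers = new List<ILayer>
                { new Explore(), new AvoidObstacles(), new Wander() };

            public string Tick(Inputs i)
            {
                foreach (var layer in layers)
                    if (layer.Asserts(i)) return layer.Command(i); // suppress the rest
                return "halt";
            }
        }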

    Read the article

  • KERPOOOOW!

    - by Matt Christian
    Recently I discovered the colorful world of comic books. In the past I've read comics a few times but never really got into them. When I wanted to start a collection, I decided on either video games or comics, yet stayed away from comics because I am less familiar with them. In any case, I stopped by my local comic shop and picked up a few comics and a few trade paperbacks. After reading them and understanding their basic flow, I began to enjoy not only the stories but the art styles hiding behind those little white bubbles of text (well, they're USUALLY white). On my first stop at the comic store I ended up with:

    - Nemesis #1 (cover A)
    - Shuddertown #1 (cover A, I think)
    - Daredevil: King of Hell's Kitchen Trade Paperback
    - Peter Parker: Spiderman - One Small Break Trade Paperback

    It took me about 3-4 days to read all of that, including re-reading the single issues and glancing over the beginning of Daredevil again. After a week of looking around online, I knew a little more about the comics I wanted to pick up and the kind of art style I enjoyed. While Peter Parker: Spiderman was okay, I really enjoyed the detailed, realistic look of Daredevil and Shuddertown.

    Now, a few years back I picked up the game The Darkness for PS3. I knew it was based on a comic but never read the comic. I decided I'd pick up a few issues of it and ended up with:

    - The Darkness #80 (cover A)
    - The Darkness #81 (cover A)
    - The Darkness #82 (cover A)
    - The Darkness #83 (cover A)
    - The Darkness Shadows and Flame #1 (one-shot; cover A)
    - The Darkness Origins: Volume 1 Trade Paperback (contains The Darkness #1-6)
    - New Age boards and bags for storing my comics

    The Darkness is relatively good, though jumping from issue #6 to issue #80 I lost track a bit of who the enemy in the current series is. I think issue #83 was my favorite of them all. I'm signed up at the local shop to continue getting Nemesis, The Darkness, and Shuddertown, and I'll probably pick up a few different ones this weekend...

    Read the article

  • Dealing with 2D pixel shaders and SpriteBatches in XNA 4.0 component-object game engine?

    - by DaveStance
    I've got a bit of experience with shaders in general, having implemented a couple of very simple 3D fragment and vertex shaders in OpenGL/WebGL in the past. Currently, I'm working on a 2D game engine in XNA 4.0 and I'm struggling with the process of integrating per-object and full-scene shaders into my current architecture. I'm using a component-entity design, wherein my "Entities" are merely collections of components that are acted upon by discrete system managers (SpatialProvider, SceneProvider, etc.). In the context of this question, my draw call looks something like this: SceneProvider::Draw(GameTime) calls ComponentManager::Draw(GameTime, SpriteBatch), which calls (on each drawable component) DrawnComponent::Draw(GameTime, SpriteBatch).

    The SpriteBatch is set up, with the default SpriteBatch shader, in the SceneProvider class just before it tells the ComponentManager to start rendering the scene. From my understanding, if a component needs to use a special shader to draw itself, it must do the following when its Draw(GameTime, SpriteBatch) method is invoked:

        public void Draw(GameTime gameTime, SpriteBatch spriteBatch)
        {
            spriteBatch.End();
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                              null, null, null, EffectShader, ViewMatrix);

            // Draw things here that are shaded by the "EffectShader."

            spriteBatch.End();
            spriteBatch.Begin(/* same settings that were set by SceneProvider to
                                 ensure the rest of the scene is rendered normally */);
        }

    My question is, having been told that numerous calls to SpriteBatch.Begin() and SpriteBatch.End() within a single frame are terrible for performance, is there a better way to do this? Is there a way to instruct the currently running SpriteBatch to simply change the Effect shader it is using for this particular draw call and then switch it back before the function ends?
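    One technique worth trying, sketched here with no guarantee it fits this particular batching setup: when the batch is begun with SpriteSortMode.Immediate, each Draw call is issued to the GPU immediately, so applying an effect pass between Draw calls changes the shader without ending the batch (texture and position below are hypothetical fields):

        public void Draw(GameTime gameTime, SpriteBatch spriteBatch)
        {
            // Assumes SceneProvider began the batch with SpriteSortMode.Immediate;
            // in that mode, device state applied now affects the very next Draw.
            EffectShader.CurrentTechnique.Passes[0].Apply();
            spriteBatch.Draw(texture, position, Color.White);

            // Components drawn after this one should apply their own effect (or a
            // shared default effect) the same way, rather than relying on the
            // state this component left behind.
        }

    The trade-off is that Immediate mode gives up the sorting and state-batching benefits of Deferred mode, so it is worth measuring this against the End/Begin approach above.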

    Read the article

  • Kent .Net/SqlServer User Group – Upcoming events

    - by Dave Ballantyne
    At the Kent user group we have two upcoming events. Both are to be held at the F-Keys Training suite (http://f-keys.co.uk/) in Rochester, Kent. If you haven't attended before, please note the location.

    14 June

    Is your code S.O.L.I.D? - Nathan Gloyn
    Everybody keeps on about SOLID principles, but what are they? And why should you care? This session is an introduction to SOLID: I'll walk through each principle, telling you about it and then showing how a code base can be refactored using the principles to make your life easier. By the end of the session you should have a basic understanding of each principle, why to use it, and how using it can improve your code.

    Building composite applications with OpenRasta 3 - Sebastien Lambla
    A wave of change is coming to Web development on .NET. Packaging technologies are bringing dependency management to .NET for the first time, streamlining development workflow and creating new possibilities for deployment and administration. The sky's the limit, and in this session we'll explore how open frameworks can help us leverage composition for the web.

    Register here for this event: http://www.eventbrite.com/event/1643797643

    05 July

    Tony Rogerson
    Achieving a throughput of 1.5 terabytes, or over 92,000 8-Kbyte 100% random reads per second, on kit costing less than 2.5K, and of course what to do with it! The session will focus on commodity kit and how it can be used within a business to provide massive performance benefits at little cost.

    End to End Report Creation and Management using SQL Server Reporting Services - Chris Testa-O'Neill
    This session will walk through the authoring, management, and delivery of reports, with a focus on the new features of Reporting Services 2008 R2. At the end of this session you will understand how to create a report in the new report designer, be aware of the report management options available, and know the delivery mechanisms that can be used to deliver reports.

    Register here for this event: http://www.eventbrite.com/event/1643805667

    Hope to see you at one or the other (or even both, if you are that way inclined).

    Read the article

  • C++ and function pointers assessment: lack of inspiration

    - by OlivierDofus
    I've got an assessment to give to my students. It's about C++ and function pointers. Their skill level is intermediate: it's the first year of a programming school after a bachelor's degree. To give you something precise, here's a sample solution for one of the 3 exercises they had to do in 30 minutes (the question was: "here's a version of a code that was written without function pointers; write down the same thing but with function pointers"):

        typedef void (*fcPtr) (istream &);

        fcPtr ArrayFct [] = { Delete, Insert, Swap, Move };

        void HandleCmd (const string && Cmd) {
            string AvalaibleCommands ("DISM");
            string::size_type Pos;
            istringstream Flux (Cmd);
            char CodeOp;

            Flux >> CodeOp;
            Pos = AvalaibleCommands.find (toupper (CodeOp));
            if (Pos != string::npos) {
                ArrayFct [Pos](Flux);
            }
        }

    Any idea where I could find some inspiration? Some of the students have understood the principles, even though it's very hard for them to write C++ code. I know them, I know they're clever, and I'm pretty sure they should be very good project managers. So writing C++ code is not that important after all; understanding is the most important part (IMHO). I'm wondering about maybe breaking the habits and making half of the questions about the principles, or even better, giving some sample in another language and asking them why it's better to use function pointers instead of classical programming (usually a big switch case). Any idea where I could look? Where could I find some inspiration?
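    On the "sample in another language" idea, one possible hedged sketch in C# (all names invented): the same DISM-style dispatch written twice, once as a switch and once as a delegate table, which is C#'s analogue of the array of function pointers and makes the "why" discussion concrete.

        using System;
        using System.Collections.Generic;

        class Dispatch
        {
            static void Delete(string args) => Console.WriteLine($"delete {args}");
            static void Insert(string args) => Console.WriteLine($"insert {args}");

            // Classical version: every new command means editing this switch.
            static void HandleCmdSwitch(char op, string args)
            {
                switch (char.ToUpper(op))
                {
                    case 'D': Delete(args); break;
                    case 'I': Insert(args); break;
                }
            }

            // Table version: adding a command is one new entry, no control-flow edits.
            static readonly Dictionary<char, Action<string>> Table =
                new Dictionary<char, Action<string>> { ['D'] = Delete, ['I'] = Insert };

            static void HandleCmdTable(char op, string args)
            {
                if (Table.TryGetValue(char.ToUpper(op), out var handler)) handler(args);
            }

            static void Main()
            {
                HandleCmdSwitch('d', "record 1");
                HandleCmdTable('i', "record 2");
            }
        }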

    Read the article

  • SQL Azure: Notes on Building a Shard Technology

    - by Herve Roggero
    In Chapter 10 of the book on SQL Azure (http://www.apress.com/book/view/9781430229612) that I am co-authoring, I am digging deeper into what it takes to write a shard. It's actually a pretty cool exercise, and I wanted to share some thoughts on how I am designing the technology.

    A shard is a technology that spreads the load of database requests over multiple databases, as transparently as possible. The type of shard I am building is called a Vertical Partition Shard (VPS). A VPS is a mechanism by which the data is stored in one or more databases behind the scenes, but your code has no idea at design time which data is in which database. It's like having a mini cloud for records instead of services.

    Imagine you have three SQL Azure databases that have the same schema (DB1, DB2, and DB3), and you would like to issue a SELECT * FROM Users against all three databases, concatenate the results into a single resultset, and order by last name. Imagine you want to ensure your code doesn't need to change if you add a new database to the shard (DB4). Now imagine that you want to make sure all three databases are queried at the same time, in a multi-threaded manner, so your code doesn't have to wait for three sequential database calls. Then imagine you would like to obtain a breadcrumb (in the form of a new, virtual column) that gives you a hint as to which database a record came from, so that you could update it if needed. Now imagine all that is done through the standard SqlClient library... and you have the shard I am currently building.

    Here are some lessons learned and techniques I am using with this shard:

    - Parallel processing: Querying databases in parallel is not too hard using the Task Parallel Library; all you need is to lock your resources when needed.
    - Deleting/updating data: That's not too bad either, as long as you have a breadcrumb. However, it becomes more difficult if you need to update a single record and you don't know which database it is in.
    - Inserting data: I am using a round-robin approach in which each new insert request is directed to the next database in the shard. Not sure how to deal with bulk loads just yet...
    - Shard databases: I use a static collection of SqlConnection objects which needs to be loaded once; from there on, all the shard commands use this collection.
    - Extension methods: In order to make it look like the shard commands are part of the SqlClient class, I use extension methods. For example, I added ExecuteShardQuery and ExecuteShardNonQuery methods to SqlClient.
    - Exceptions: Capturing exceptions in multi-threaded code is interesting... but I kept it simple for now. I am using a ConcurrentQueue to store my exceptions.
    - Database GUID: Every database in the shard is given a GUID, which is calculated based on the connection string's values.
    - DataTable: The shard methods return a DataTable object which can be bound to objects.

    I will be sharing the code soon as an open-source project on CodePlex. Please stay tuned on twitter to know when it will be available (@hroggero), or check www.bluesyntax.net for updates on the shard. Thanks!
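    To make the parallel-query idea concrete, here is a rough, hypothetical sketch (not the actual shard library described above, which has not been published yet) of fanning one query out over the shard's connections with the Task Parallel Library, adding a breadcrumb column, and merging the results; ordering can then be applied to the merged table with a DataView.

        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;
        using System.Threading.Tasks;

        static class ShardSketch
        {
            // One connection string per database in the shard (placeholders).
            static readonly List<string> ShardConnections = new List<string>
            {
                "connection-string-for-DB1",
                "connection-string-for-DB2",
                "connection-string-for-DB3"
            };

            public static DataTable ExecuteShardQuery(string sql)
            {
                var results = new DataTable[ShardConnections.Count];

                // Query every shard member in parallel.
                Parallel.For(0, ShardConnections.Count, i =>
                {
                    using (var conn = new SqlConnection(ShardConnections[i]))
                    using (var cmd = new SqlCommand(sql, conn))
                    {
                        var table = new DataTable();
                        new SqlDataAdapter(cmd).Fill(table);

                        // Breadcrumb: remember which database each row came from.
                        var origin = table.Columns.Add("__ShardId", typeof(int));
                        foreach (DataRow row in table.Rows) row[origin] = i;

                        results[i] = table;
                    }
                });

                // Concatenate the per-database results into one table.
                var merged = results[0].Clone();
                foreach (var t in results)
                    foreach (DataRow row in t.Rows)
                        merged.ImportRow(row);
                return merged;
            }
        }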

    Read the article

  • Multitenant Design for SQL Azure: White Paper Available

    - by Herve Roggero
    Cloud computing is about scaling out all your application tiers, from the web application to the database layer. In fact, the whole promise of Azure is to pay for just what you need. You need more IIS servers? No problemo... just spin up another web server. You expect to double your storage needs for Azure Tables? No problemo; you are covered there too... just pay for your storage needs. But what about the database tier, SQL Azure? How do you add new databases easily and transparently, so that your application simply uses more of SQL Azure if it needs to, without changing a single line of code? And what if you need to scale back down?

    Welcome to the world of database scalability. There are many terms that describe database scalability, including data federation, multitenant designs, and even NoSQL, depending on the technical solution you are implementing. Because SQL Azure is a transactional database system, NoSQL is not really an option. However, data federation and multitenant designs offer some very interesting scalability options that are worth considering.

    Data federation, a feature of SQL Azure that will be offered in the future, provides very interesting capabilities available natively on the SQL Azure platform. More to come in a few weeks...

    Multitenant designs, on the other hand, are design practices and technologies intended to help you reach flexible scalability options not available otherwise. The first incarnation of such a method was made available on CodePlex as an open source project (http://enzosqlshard.codeplex.com). That project was an attempt to provide a sharding library for educational purposes. All that sounds really cool... and really esoteric... almost a form of database "voodoo"... However, after being on multiple Azure projects, I am starting to see a real need. Customers want to be able to free themselves from the database tier, so that if they have 10 new customers tomorrow, all they need to do is add 2 more SQL Azure instances. It's that simple.

    How you achieve this, along with suggested application design guidelines, is covered in a white paper I just published. The white paper has two primary sections. The first describes the business and technical problem at hand, and how to classify it according to specific design patterns. For example, I discuss compressed shards through schema separation. The second section offers a method for addressing the needs of a multitenant design using a new library, the big brother of the CodePlex project mentioned previously (which I created earlier this year), complete with a management interface and such. A beta of this platform will be made available within weeks, as soon as the documentation is ready.

    I would like to ask you to drop me a quick email at [email protected] if you are going to download the white paper. It's not required, but it would help me get in touch with you for feedback. You can download the white paper here: http://www.bluesyntax.net/files/EnzoFramework.pdf . Thank you, and I am looking for feedback, thoughts, and implementation opportunities.

    Read the article

  • How can I best manage making open source code releases from my company's confidential research code?

    - by DeveloperDon
    My company (let's call them Acme Technology) has a library of approximately one thousand source files that originally came from its Acme Labs research group, was incubated in a development group for a couple of years, and has more recently been provided to a handful of customers under non-disclosure. Acme is getting ready to release perhaps 75% of the code to the open source community. The other 25% would be released later, but for now either is not ready for customer use or contains code related to future innovations they need to keep out of the hands of competitors.

    The code is presently formatted with #ifdefs that permit the same code base to work with the pre-production platforms that will be available to university researchers and a much wider range of commercial customers once it goes to open source, while at the same time being available for experimentation and prototyping and forward compatibility testing with the future platform. Keeping a single code base is considered essential for the economics (and sanity) of my group, who would have a tough time maintaining two copies in parallel.

    Files in our current base look something like this:

        // Copyright 2012 (C) Acme Technology, All Rights Reserved.
        // Very large, often varied and restrictive copyright license in English and French,
        // sometimes also embedded in make files and shell scripts with varied
        // comment styles.

        ... Usual header stuff...

        void initTechnologyLibrary() {
            nuiInterface(on);
        #ifdef UNDER_RESEARCH
            holographicVisualization(on);
        #endif
        }

    And we would like to convert them to something like:

        // GPL Copyright (C) Acme Technology Labs 2012, Some rights reserved.
        // Acme appreciates your interest in its technology, please contact [email protected]
        // for technical support, and www.acme.com/emergingTech for updates and RSS feed.

        ... Usual header stuff...

        void initTechnologyLibrary() {
            nuiInterface(on);
        }

    Is there a tool, parse library, or popular script that can replace the copyright and strip out not just #ifdefs, but variations like #if defined(UNDER_RESEARCH), etc.? The code is presently in Git and would likely be hosted somewhere that uses Git. Would there be a way to safely link repositories together so we can efficiently reintegrate our improvements with the open source versions? Advice about other pitfalls is welcome.
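    On the #ifdef part of the question, the classic Unix tool unifdef was built for exactly this kind of partial preprocessing: for example, unifdef -UUNDER_RESEARCH removes the sections guarded by that symbol while leaving all other conditionals alone, so it is worth evaluating before writing anything custom. If a custom pass is needed anyway, here is a deliberately naive C# sketch of the idea; it handles only simple #ifdef SYMBOL ... #endif blocks (including the #if defined(SYMBOL) spelling) and does not deal with #else branches or the rest of the preprocessor grammar, and the copyright-header swap would be a separate, simpler substitution.

        using System;
        using System.IO;
        using System.Text;
        using System.Text.RegularExpressions;

        class IfdefStripper
        {
            // Symbol name taken from the example above.
            const string Symbol = "UNDER_RESEARCH";

            static string Strip(string source)
            {
                var output = new StringBuilder();
                int depth = 0; // > 0 while inside a block guarded by Symbol
                foreach (var line in source.Split('\n'))
                {
                    string trimmed = line.TrimStart();
                    bool opens = Regex.IsMatch(trimmed,
                        $@"^#\s*if(def\s+{Symbol}\b|\s+defined\s*\(\s*{Symbol}\s*\))");
                    if (depth == 0 && opens) { depth = 1; continue; }
                    if (depth > 0)
                    {
                        if (Regex.IsMatch(trimmed, @"^#\s*if")) depth++;        // nested conditional
                        else if (Regex.IsMatch(trimmed, @"^#\s*endif")) depth--; // closes a level
                        continue; // drop everything inside the guarded block
                    }
                    output.Append(line).Append('\n');
                }
                return output.ToString();
            }

            static void Main(string[] args) =>
                Console.Write(Strip(File.ReadAllText(args[0])));
        }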

    Read the article

  • Unit Tests as a learning tool - a good idea?

    - by Ekkehard.Horner
    I'm interested in ways and means of learning (a) programming language(s) efficiently. I believe that using unit test concepts and infrastructure early in that process is a good thing, even better than starting with "Hello world".

    Why: To write a decent program even for a toy/restricted problem in a new language, you'll have to master many heterogeneous concepts (control flow & variables & IO ...), and you are tempted to gloss over details just to get your program 'to work'. Putting (your understanding of) the facts about the new language into assertions with good descriptions (= success messages) enforces thinking things through with clarity and precision. Grouping topics and adding assertions to such groups is much easier than incorporating features from the second chapter of your "Learning X" book into your chapter 1 program.

    Why not: 'Real' unit tests are meant to output "1234 tests ok; 1 failure: saveWorld() chokes on negative input"; 'didactic' unit tests should output relevant facts about the new language, like:

        perl6 10-string.t
        # ### p5chop ...
        ok 13 - p5chop( "cbä" ) returns "ä"
        ok 14 - after that, victim is changed to "cb"
        # ### (p6) chop ...
        ok 27 - (p6) chop( "cbä" ) returns chopped copy: "cb"
        ok 18 - after that, victim is unchanged: "cbä"
        # ### chomp ...

    So (mis?)using unit tests may be counterproductive - practicing, while learning, habits you wouldn't use professionally.

    How: Writing 'didactic' unit tests in languages with lightweight testing systems (Perl 5/6) is easy; (mis?)using more elaborate systems (JUnit, CppUnit) may not be worth the effort or may not be suitable for a person just starting with a new language.

    So:
    - Is using unit tests as a learning tool a bad idea?
    - Can the unit test tool(s) of your favourite language(s) be used didactically?
    - Should implementation details (eventually) be discussed here or over at stackoverflow.com?
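    For comparison, the same didactic style in a C-family language might look like the following C# sketch, where each assertion records a fact about the language being learned and the message doubles as documentation (Ok is an invented helper, not part of any test framework; it mirrors the mutating-vs-copying point from the chop example above):

        using System;

        class LearningTests
        {
            static void Ok(bool cond, string fact) =>
                Console.WriteLine($"{(cond ? "ok" : "NOT OK")} - {fact}");

            static void Main()
            {
                Ok("cbä".Substring(0, 2) == "cb", "Substring(0, 2) keeps the first two chars");
                Ok("cbä".TrimEnd('ä') == "cb", "TrimEnd returns a copy with the trailing char removed");

                string victim = "cbä";
                victim.TrimEnd('ä'); // return value deliberately ignored
                Ok(victim == "cbä", "strings are immutable: TrimEnd leaves the original unchanged");
            }
        }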

    Read the article

  • Has anyone else read "Programming video games for the Evil Genius"?

    - by Martin
    I bought this book called "Programming Video Games for the Evil Genius" by Ian Cinnamon. If anyone has read or is familiar with this book, I am wondering whether they think it is worth reading. I am interested in making video games. I have already taken intro courses in C++, Java, and Python and got through okay. I've been going through this book for about a month now (SLOWLY). All I have to do is type the code exactly as in the book, BUT a lot of the code is not clearly explained. I do some research online, but I usually still have some trouble answering my questions. Then I found Stack Overflow. It's been a ton of help.

    Right now I am trying to make a racing game right out of this book, and I got to a point where the author left a bunch of errors in his code. One of the members of this website fixed it up for me, but added some stuff that I'm having trouble understanding. I spend more time trying to figure out the author's errors and fix them (or get someone to help me fix them) than I actually do learning code. I REALLY want to learn how to do this and I am ready and willing to put in the time, but I'm not sure if my time would be better spent learning from a different source. Are there any veterans out there who are familiar with this book and think it's worth it or not worth it? Should I move on to another book? Any advice for a fresh start for someone who wants to learn some video game programming?

    Read the article

  • Skanska Builds Global Workforce Insight with Cloud-Based HCM System

    - by HCM-Oracle
    By David Baum - Originally posted on Profit

    Peter Bjork grew up building things. He started his work life learning all sorts of trades at his father’s construction company in the northern part of Sweden. So in college, it was natural for him to pursue a bachelor’s degree in construction engineering—but he broke new ground when he added a master’s degree in finance to his curriculum vitae.

    Written on a traditional résumé, Bjork’s current title (vice president of information systems strategies) doesn’t reveal the diversity of his experience—that he’s adept with hammer and nails as well as rows and columns. But a big part of his current job is to work with his counterparts in human resources (HR) designing, building, and deploying the systems needed to get a complete view of the skills and potential of Skanska’s 22,000-strong white-collar workforce. And Bjork believes that complete view is essential to Skanska’s success. “Our business is really all about people,” says Bjork, who has worked with Skanska for 16 years. “You can have equipment and financial resources, but to truly succeed in a business like ours you need to have the right people in the right places. That’s what this system is helping us accomplish.”

    In a global HR environment that suffers from a paradox of high unemployment and a scarcity of skilled labor, managers need to have a complete understanding of workforce capabilities to develop management skills, recruit for open positions, ensure that staff is getting the training they need, and reduce attrition. Skanska’s human capital management (HCM) systems, based on Oracle Talent Management Cloud, play a critical role delivering that understanding. “Skanska’s philosophy of having great people, encouraging their development, and giving them the chance to move across business units has nurtured a culture of collaboration, but managing a diverse workforce spread across the globe is a monumental challenge,” says Annika Lindholm, global human resources system owner in the HR department at Skanska’s headquarters just outside of Stockholm, Sweden. “We depend heavily on Oracle’s cloud technology to support our HCM function.”

    Construction, Workers

    For Skanska’s more than 60,000 employees and contractors, managing huge construction projects is an everyday job. Beyond erecting signature buildings, management’s goal is to build a corporate culture where valuable talent can be sought out and developed, bringing in the right mix of people to support and grow the business. “Of all the companies in our space, Skanska is probably one of the strongest ones, with a laser focus on people and people development,” notes Tom Crane, chief HR and communications officer for Skanska in the United States. “Our business looks like equipment and material, but all we really have at the end of the day are people and their intellectual capital. Without them, second only to clients, of course, you really can’t achieve great things in the high-profile environment in which we work.”

    During the 1990s, Skanska entered an expansive growth phase. A string of successful acquisitions paved the way for the company’s transformation into a global enterprise. “Today the company’s focus is on profitable growth,” continues Crane. “But you can’t really achieve growth unless you are doing a very good job of developing your people and having the right people in the right places and driving a culture of growth.”

    In the United States alone, Skanska has more than 8,000 employees in four distinct business units: Skanska USA Building, also known as the Construction Manager, builds everything at ground level and above—hospitals, educational facilities, stadiums, airport terminals, and other massive projects. Skanska USA Civil does everything at ground level and below, such as light rail, water treatment facilities, power plants or power industry facilities, highways, and bridges. Skanska Infrastructure Development develops public-private partnerships—projects in which Skanska adds equity and also arranges for outside financing. Skanska Commercial Development acts like a commercial real estate developer, acquiring land and building offices on spec or build-to-suit for its clients.

    Skanska's international portfolio includes construction of the new Meadowlands Stadium.

    Getting the various units to operate collaboratively helps Skanska deliver high value to clients and shareholders. “When we have this collaboration among units, it allows us to enrich each of the business units and, at the same time, develop our future leaders to be more facile in operating across business units—more accepting of a ‘one Skanska’ approach,” explains Crane.

    Workforce Worldwide

    But HR needs processes and tools to support managers who face such business dynamics. Oracle Talent Management Cloud is helping Skanska implement world-class recruiting strategies and generate the insights needed to drive quality hiring practices, internal mobility, and a proactive approach to building talent pipelines. With their new cloud system in place, Skanska HR leaders can manage everything from recruiting, compensation, and goal and performance management to employee learning and talent review—all as part of a single, cohesive software-as-a-service (SaaS) environment.

    Skanska has successfully implemented two modules from Oracle Talent Management Cloud—the recruiting and performance management modules—and is in the process of implementing the learn module. Internally, they call the systems Skanska Recruit, Skanska Talent, and Skanska Learn.

    The timing is apropos. With high rates of unemployment in recent years, there have been many job candidates on the market. However, talent scarcity continues to frustrate recruiters. Oracle Taleo Recruiting Cloud Service, one of the applications in the Oracle Talent Management cloud portfolio, enables Skanska managers to create more-intelligent recruiting strategies, pulling high-performer profile statistics to create new candidate profiles and using multitiered screening and assessments to ensure that only the best-suited candidate applications make it to the recruiter’s desk. Tools such as applicant tracking, interview management, and requisition management help recruiters and hiring managers streamline the hiring process. Oracle’s cloud-based software system automates and streamlines many other HR processes for Skanska’s multinational organization and delivers insight into the success of recruiting and talent-management efforts.

    “The Oracle system is definitely helping us to construct global HR processes,” adds Bjork. “It is really important that we have a business model that is decentralized, so we can effectively serve our local markets, and interact with our global ERP [enterprise resource planning] systems as well. We would not be able to do this without a really good, well-integrated HCM system that could support these efforts.”

    A key piece of this effort is something Skanska has developed internally called the Skanska Leadership Profile. Core competencies, on which all employees are measured, are used in performance reviews to determine weak areas but also to discover talent, such as those who will be promoted or need succession plans. This global profiling system brings consistency to the way HR professionals evaluate and review talent across the company, with a consistent set of ratings and a consistent definition of competencies. All salaried employees in Skanska are tied to a talent management process that gives opportunity for midyear and year-end reviews. Using the performance management module, managers can align individual goals with corporate goals; provide clear visibility into how each employee contributes to the success of the organization; and drive a strategic, end-to-end talent management strategy with a single, integrated system for all talent-related activities. This is critical to a company that is highly focused on ensuring that every employee has a development plan linked to his or her succession potential.

    “Our approach all along has been to deploy software applications that are seamless to end users,” says Crane. “The beauty of a cloud-based system is that much of the functionality takes place behind the scenes so we can focus on making sure users can access the data when they need it. This model greatly improves their efficiency.”

    The employee profile not only sets a competency baseline for new employees but is also integrated with Skanska’s other back-office Oracle systems to ensure consistency in the way information is used to support other business functions. “Since we have about a dozen different HR systems that are providing us with information, we built a master database that collects all the information,” explains Lindholm. “That data is sent not only to Oracle Talent Management Cloud, but also to other systems that are dependent on this information.”

    Collaboration to Scale

    Skanska is poised to launch a new Oracle module to link employee learning plans to the review process and recruitment assessments. According to Crane, connecting these processes allows Skanska managers to see employees’ progress and produce an updated learning program. For example, as employees take classes, supervisors can consult the Oracle Talent Management Cloud portal to monitor progress and align it to each individual’s training and development plan. “That’s a pretty compelling solution for an organization that wants to manage its talent on a real-time basis and see how the training is working,” Crane says.

    Rolling out Oracle Talent Management Cloud was a joint effort among HR, IT, and a global group that oversaw the worldwide implementation. Skanska deployed the solution quickly across all markets at once. In the United States, for example, more than 35 offices quickly got up to speed on the new system via webinars for employees and face-to-face training for the HR group. “With any migration, there are moments when you hold your breath, but in this case, we had very few problems getting the system up and running,” says Crane.

    Lindholm adds, “There has been very little resistance to the system as users recognize its potential. Customizations are easy, and a lasting partnership has developed between Skanska and Oracle when help is needed. They listen to us.”

    Bjork elaborates on the implementation process from an IT perspective. “Deploying a SaaS system removes a lot of the complexity,” he says. “You can downsize the IT part and focus on the business part, which increases the probability of a successful implementation. If you want to scale the system, you make a quick phone call. That’s all it took recently when we added 4,000 users. We didn’t have to think about resizing the servers or hiring more IT people. Oracle does that for us, and they have provided very good support.”

    As a result, Skanska has been able to implement a single, cost-effective talent management solution across the organization to support its strategy to recruit and develop a world-class staff. Stakeholders are confident that they are providing the most efficient recruitment system possible for competent personnel at all levels within the company—from skilled workers at construction sites to top management at headquarters. And Skanska can retain skilled employees and ensure that they receive the development opportunities they need to grow and advance.

    Read the article

  • What is the reason for section 1 of LGPL, and what is the implication for section 9?

    - by Roland Schulz
    Why was section 1 added to LGPLv3? My understanding of sections 3 and 4 is that one can convey the combined work under any license, with no requirements from GPLv3 (besides those explicitly stated as requirements in LGPLv3 sections 3 and 4). Given that, why is section 1 necessary? Wouldn't sections 3 and 4 by themselves already imply what section 1 explicitly states? I assume that I'm missing something and that section 1 isn't redundant. Assuming so, does this have implications for other sections of GPLv3? For example, does conveying a covered work under sections 3 and 4 fall under the patent clause of section 10 of GPLv3? Why does section 1 not also state an exception for section 10? Put another way: is the Eigen FAQ correct in stating that LGPL requires [for header-only libraries] pretty much the same as the 2-clause BSD license? Is it true that no GPLv3 patent clauses apply to conveying object files that include material from LGPLv3 headers?

    Read the article

  • Does command/query separation apply to a method that creates an object and returns its ID?

    - by Gilles
    Let's pretend we have a service that calls a business process. This process will call on the data layer to create an object of type A in the database. Afterwards we need to call another class of the data layer to create an instance of type B in the database, passing some information about A as a foreign key. In the first method we create an object (modify state) and return its ID (query) in a single call. In the second approach we use two methods, one (createA) for the save and the other (getID) for the query.

    public void FirstMethod(Info info)
    {
        var id = firstRepository.createA(info);
        secondRepository.createB(id);
    }

    public void SecondMethod(Info info)
    {
        firstRepository.createA(info);
        var key = firstRepository.getID(info);
        secondRepository.createB(key);
    }

    From my understanding, the second method follows command/query separation more fully. But I find it wasteful and counter-intuitive to query the database for the object we have just created. How do you reconcile CQS with such a scenario? Does only the second method follow CQS, and if so, is it preferable in this case?
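
    One way to reconcile the two is to let the caller generate the identifier up front, so that creating A remains a pure command and no read-back query is needed. The sketch below is only an illustration of that idea: the repository interfaces, the Guid-based keys, and the ThirdMethod name are assumptions made for this example, not part of the original question.

    using System;

    public class Info { /* fields omitted for brevity */ }

    public interface IFirstRepository
    {
        // A pure command: the caller supplies the key; nothing is returned.
        void CreateA(Guid id, Info info);
    }

    public interface ISecondRepository
    {
        void CreateB(Guid foreignKeyToA);
    }

    public class BusinessProcess
    {
        private readonly IFirstRepository firstRepository;
        private readonly ISecondRepository secondRepository;

        public BusinessProcess(IFirstRepository first, ISecondRepository second)
        {
            firstRepository = first;
            secondRepository = second;
        }

        public void ThirdMethod(Info info)
        {
            // The caller owns the identity, so both repository calls are
            // commands and no query follows the write.
            var id = Guid.NewGuid();
            firstRepository.CreateA(id, info);
            secondRepository.CreateB(id);
        }
    }

    That said, many practitioners accept "create and return the new key" as a pragmatic exception to CQS, much as a stack's Pop both mutates and returns, so FirstMethod is widely considered defensible too.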

    Read the article

  • NoVa Code Camp 2010.1 – Don’t Miss It!

    - by John Blumenauer
    Tomorrow, June 12th, will be NoVa Code Camp 2010.1, held at the Microsoft Technical Center in Reston, VA.  What’s in store?  Lots of great topics from some truly knowledgeable speakers from the mid-Atlantic region.  This event will have four talks on Azure alone, plus sessions on ASP.NET MVC2, SharePoint, WP7, Silverlight, MEF, and WCF, and some great presentations centered on best practices and design. The schedule can be found at:  http://novacodecamp.org/RecentCodeCamps/NovaCodeCamp201001/Schedule/tabid/202/Default.aspx The session descriptions and speaker list are at:  http://novacodecamp.org/RecentCodeCamps/NovaCodeCamp201001/Sessions/tabid/197/Default.aspx We’re also fortunate this year to have several excellent sponsors.  The sponsor list can be found at:  http://novacodecamp.org/RecentCodeCamps/NovaCodeCamp201001/Sponsors/tabid/198/Default.aspx.  Thanks to these sponsors, attendees will enjoy good food throughout the day, and the end-of-day raffle will have some great surprises in the way of swag! I’ll be presenting an introduction to MEF and then showing how it can be used to extend Silverlight applications.  If you’re new to MEF and/or Silverlight, don’t worry.  I’ll ease into the concepts so everyone leaves with an understanding of MEF by the end of the session.   Don’t miss NoVa Code Camp 2010.1.  See YOU there!

    Read the article

  • How are lookaheads propagated in the "channel" method of building an LALR parser?

    - by greenoldman
    The method is described in the Dragon Book; however, I read about it in "Parsing Techniques" by D. Grune and C.J.H. Jacobs. I start from my understanding of building channels for the NFA: channels are built once; they are like water channels with a current. You "drop" lookahead symbols in the right places (the sources) of a channel, and they propagate with the "current". When a symbol propagates there are no barriers (the only things needed for propagation are the presence of a channel and its direction/current); i.e., a lookahead cannot just die out of the blue. Is that right? If I am correct, then the eof lookahead should be present in all states, because its source is the start production, and all other production states are reachable from the start state. How the DFA is made out of this NFA is not perfectly clear to me. The authors of the mentioned book write about preserving the channels, but I see no purpose in that if you have already propagated the lookaheads. If the channels have to be preserved, are they cut off from the source when the DFA state does not include the source NFA state? I assume not: the channels still run between DFA states, not only within a given DFA state. In effect, eof should still be present in all items in all states. But if you take a look at the DFA presented in the book (the pdf is from the errata): DFA for LALR (fig. 9.34 in the book, p. 301), you will see there are items without eof in the lookahead. The grammar for this DFA is:

    S -> E
    E -> E - T
    E -> T
    T -> ( E )
    T -> n

    So how was it computed, when was eof dropped, and on what condition?

    Update

    It is a textual pdf, so here are the two interesting states (in the DFA; # is eof):

    State 1:
    S -> • E       [#]
    E -> • E - T   [# -]
    E -> • T       [# -]
    T -> • n       [# -]
    T -> • ( E )   [# -]

    State 6:
    T -> ( • E )   [# - )]
    E -> • E - T   [- )]
    E -> • T       [- )]
    T -> • n       [- )]
    T -> • ( E )   [- )]

    The arc from state 1 to state 6 is labeled (.
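
    The step that cuts the eof channel is the closure. A closure item B -> • γ derived from [A -> α • B β, L] receives FIRST(β) as spontaneously generated lookaheads, while a propagation channel from L is laid down only when β can derive the empty string. In state 6 the kernel is T -> ( • E ) and β is the single, non-nullable symbol ), so # never flows into the closure items; only ) (spontaneous from the kernel) and - (spontaneous from E -> • E - T) appear there. Below is a minimal sketch of that rule, with this grammar's FIRST sets written out by hand; it is an illustration of the closure rule only, not the book's full channel algorithm.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class LalrLookaheadDemo
    {
        // FIRST sets for S -> E; E -> E - T | T; T -> ( E ) | n.
        // No symbol in this grammar derives the empty string.
        static readonly Dictionary<string, HashSet<string>> First = new()
        {
            ["E"] = new() { "(", "n" },
            ["T"] = new() { "(", "n" },
            ["-"] = new() { "-" },
            [")"] = new() { ")" },
            ["("] = new() { "(" },
            ["n"] = new() { "n" },
        };

        // Lookaheads of a closure item B -> . gamma derived from [A -> alpha . B beta, L]:
        // FIRST(beta) is generated spontaneously; L is propagated through a channel
        // only when beta is nullable (in this grammar: only when beta is empty).
        static HashSet<string> ClosureLookaheads(string[] beta, HashSet<string> parentLookaheads)
        {
            var result = new HashSet<string>();
            if (beta.Length > 0)
                result.UnionWith(First[beta[0]]); // nothing is nullable, so FIRST(beta) = FIRST(beta[0])
            else
                result.UnionWith(parentLookaheads); // beta empty: the channel stays open
            return result;
        }

        static void Main()
        {
            // State 6 kernel: T -> ( . E ) with lookaheads { #, -, ) }.
            var kernel = new HashSet<string> { "#", "-", ")" };

            // Closing over E: beta = ")" is not nullable, so '#' is NOT propagated.
            var eLookaheads = ClosureLookaheads(new[] { ")" }, kernel);

            // Closing E -> . E - T over E again adds FIRST("- T") = { - }.
            eLookaheads.UnionWith(ClosureLookaheads(new[] { "-", "T" }, eLookaheads));

            Console.WriteLine(string.Join(" ", eLookaheads.OrderBy(s => s, StringComparer.Ordinal))); // prints: ) -
        }
    }

    So the lookahead does not die in transit: where something non-nullable follows B, the propagation channel into the closure items is simply never laid down, and only spontaneous lookaheads enter there.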

    Read the article
