Search Results

Search found 16987 results on 680 pages for 'second'.


  • Bug in Delphi XE RegularExpressions Unit

    - by Jan Goyvaerts
    Using the new RegularExpressions unit in Delphi XE, you can iterate over all the matches that a regex finds in a string like this:

        procedure TForm1.Button1Click(Sender: TObject);
        var
          RegEx: TRegEx;
          Match: TMatch;
        begin
          RegEx := TRegex.Create('\w+');
          Match := RegEx.Match('One two three four');
          while Match.Success do
          begin
            Memo1.Lines.Add(Match.Value);
            Match := Match.NextMatch;
          end;
        end;

    Or you could save yourself two lines of code by using the static TRegEx.Match call:

        procedure TForm1.Button2Click(Sender: TObject);
        var
          Match: TMatch;
        begin
          Match := TRegEx.Match('One two three four', '\w+');
          while Match.Success do
          begin
            Memo1.Lines.Add(Match.Value);
            Match := Match.NextMatch;
          end;
        end;

    Unfortunately, due to a bug in the RegularExpressions unit, the static call doesn't work. Depending on your exact code, you may get fewer matches than you should, get blank matches, or crash with an access violation.

    The RegularExpressions unit defines TRegEx and TMatch as records, so you don't have to explicitly create and destroy them. Internally, TRegEx uses TPerlRegEx to do the heavy lifting. TPerlRegEx is a class that needs to be created and destroyed like any other class. If you look at the TRegEx source code, you'll notice that it uses an interface to destroy the TPerlRegEx instance when TRegEx goes out of scope. Interfaces are reference counted in Delphi, making them usable for automatic memory management.

    The bug is that TMatch and TGroupCollection also need the TPerlRegEx instance to do their work. TRegEx passes its TPerlRegEx instance to TMatch and TGroupCollection, but it does not pass the interface that is responsible for destroying TPerlRegEx. This is not a problem in the first code sample: TRegEx stays in scope until we're done with TMatch, and the interface is destroyed when Button1Click exits. In the second code sample, the static TRegEx.Match call creates a local variable of type TRegEx. That local variable goes out of scope when TRegEx.Match returns, so the reference count on the interface reaches zero and TPerlRegEx is destroyed. When we then call NextMatch, the TMatch record tries to use a TPerlRegEx instance that has already been destroyed.

    To fix this bug, delete or rename the two RegularExpressions.dcu files and copy RegularExpressions.pas into your source code folder. Make these changes to both the TMatch and TGroupCollection records in this unit:

    1. Declare FNotifier: IInterface; in the private section.
    2. Add the parameter ANotifier: IInterface; to the Create constructor.
    3. Assign FNotifier := ANotifier; in the constructor's implementation.

    You also need to add the ANotifier: IInterface; parameter to the TMatchCollection.Create constructor. Now try to compile some code that uses the RegularExpressions unit. The compiler will flag all calls to TMatch.Create, TGroupCollection.Create and TMatchCollection.Create. Fix them by adding the ANotifier or FNotifier parameter, depending on whether ARegEx or FRegEx is being passed. With these fixes, the TPerlRegEx instance won't be destroyed until the last TRegEx, TMatch, or TGroupCollection that uses it goes out of scope or is used with a different regular expression.
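
    The fix works because holding a reference-counted interface inside the record is what keeps the underlying object alive. As an illustration only, the self-contained Delphi sketch below mimics that pattern with stand-in types; TEngine, TEngineOwner and TMatchLike are invented names and are not the declarations from RegularExpressions.pas, but the FNotifier field and ANotifier parameter play exactly the role described in the three steps above.

        program NotifierSketch;

        {$APPTYPE CONSOLE}

        type
          // Stand-in for TPerlRegEx: a plain class that must not be freed
          // while anything still uses it.
          TEngine = class
            procedure Ping;
            destructor Destroy; override;
          end;

          // Stand-in for the reference-counted interface TRegEx uses to free its engine.
          TEngineOwner = class(TInterfacedObject)
          private
            FEngine: TEngine;
          public
            constructor Create(AEngine: TEngine);
            destructor Destroy; override;
          end;

          // Stand-in for the patched TMatch: FNotifier (step 1) is filled from the
          // ANotifier parameter (steps 2 and 3), so the owner's reference count
          // cannot reach zero while this record is still alive.
          TMatchLike = record
          private
            FEngine: TEngine;
            FNotifier: IInterface;
          public
            constructor Create(AEngine: TEngine; ANotifier: IInterface);
            procedure Use;
          end;

        procedure TEngine.Ping;
        begin
          Writeln('engine still alive');
        end;

        destructor TEngine.Destroy;
        begin
          Writeln('engine destroyed');
          inherited;
        end;

        constructor TEngineOwner.Create(AEngine: TEngine);
        begin
          inherited Create;
          FEngine := AEngine;
        end;

        destructor TEngineOwner.Destroy;
        begin
          FEngine.Free;
          inherited;
        end;

        constructor TMatchLike.Create(AEngine: TEngine; ANotifier: IInterface);
        begin
          FEngine := AEngine;
          FNotifier := ANotifier; // remove this line and the engine dies too early
        end;

        procedure TMatchLike.Use;
        begin
          FEngine.Ping;
        end;

        function MakeMatch: TMatchLike;
        var
          Engine: TEngine;
        begin
          Engine := TEngine.Create;
          // The temporary interface reference disappears when this function returns,
          // just like the local TRegEx inside the static TRegEx.Match; only the copy
          // held in FNotifier keeps the engine alive afterwards.
          Result := TMatchLike.Create(Engine, TEngineOwner.Create(Engine));
        end;

        var
          Match: TMatchLike;
        begin
          Match := MakeMatch;
          Match.Use; // prints 'engine still alive'; 'engine destroyed' follows at shutdown
        end.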

    Read the article

  • As a person getting into mobile development, what's the best mobile platform in terms of profitability? [closed]

    - by Kyle Loman
    I realize this question can range very far so would love to hear any and all opinions on this. However, I'll be honest and say that I have been thinking of this in terms of most profitable. I know how this may sound either way but this is one of my main sticking points. I realize that I'm not guaranteed a single cent and success is never guaranteed but I'm going into this with the thought of making something out of it both financially and also for my own interest. I know that iOS gets a lot of attention on this front but Android commands a lot more market share. However, I know there are drawbacks to Android too, whether it's in the actual development process and programming (though I've heard conflicting reports on this, such as how easy/difficult it is for to address screen res in different devices) or the app ecosystem being flooded. But iOS's app ecosystem has been described as too saturated and harder to compete in for that reason. Since Windows Phone has fewer apps than both of those two, that might be the best place to start in order to be closer to the ground floor of the store and be noticed more? Less saturation = better chances of sales or differentiating? Something like the gold rush during the first years of the iOS App Store (not exactly but at least in concept)? Would it be that despite fewer users on the platform, there's more exposure due to less competition so that may translate to better success at sales? Plus, I know MS is in it for the long haul so I'm not too fearful of something like WebOS going away. Obviously RIM isn't very popular nowadays but I read a recent article that says Blackberry actually has the apps that make the most money, any thoughts on that: http://gigaom.com/mobile/which-mobile-oss-apps-make-most-money-surprise-its-blackberry/ Again, this is all I've heard or known about so if there's anything to add or correct here, please do. In addition, this has actually affected my next personal phone upgrade. I'm eligible for a carrier discount now and I've had my eye on the iPhone 5. However, the Lumia 920 is the one I'm holding out for and I'm open to trying an Android but I'm not sure I can wait that long for any new Nexus or even the Razr HD. Even the new Lumia in November is making me antsy, I'm so close to just getting an iPhone 5. But when I say this has affected my phone choice, I'd want to be able to carry the apps I write with me so that I'm able to pull my phone out to show people without having to carry around a second device to do so. So that's why I'd like to make my personal phone match the main platform I'm developing for. Of course, I will likely expand to other platforms if I gain any decent success but the one I target now would serve well as my personal phone I carry around so that I can use it as a marketing tool, in a sense, showing people my apps if the opportunity presents itself. So what's the best mobile platform to choose, and especially in regards to most lucrative? As said previously, this would influence my personal phone choice greatly. Thanks in advance and I hope this isn't taken the wrong way - I understand there are trade-offs and other factors that may balance this out but making some revenue is key among that. 
For some background, I have done software development and know programming language concepts so I'm not entirely new to it and I do get the notion of being familiar with these things so that I can translate this skill among a variety of languages but I'm currently just having difficulty choosing my first main mobile platform based on the criteria I've outlined above.

    Read the article

  • A story of Murphy – my technical issues at TechDays Switzerland #chtd

    - by Laurent Bugnion
    I had two sessions at the recent Swiss TechDays. While the first one (Advanced Development for Windows Phone 8) went extremely well (I think), I had a very annoying technical issue in the beginning of my second session. First let me add that I talked to Microsoft about that and I hope they will change a few things in the room assignment for next year. My two sessions were one right after the other, with only 15 minutes break to change room. I don’t mind having two sessions so close from each other, but I would really like them to be in the same room in order to avoid having to move my laptops (plural, that will become important later) and redoing the tech check. That being said, I am guilty of not checking where my talks would be before the day before the conference, and when I did notice, it was too late to change it. After my first session, I quickly moved to the other room and setup my main laptop, a Dell Precision. We tested the video output (VGA) and didn’t notice anything special. The projectors are using a fairly high resolution (kudos to the Basel conference center for not having old school 1024x768 projectors anymore, that makes Blend really hard to demo ;) but since everything went great during the first talk, I was not worried. In fact I even had some time to chat with some early attendees about my Microsoft Surface and the Samsung Slate 7, which I had carried with me in addition to the Precision. I just thought it would be nice to show the hardware that Windows 8 can run on, without thinking any further. When the session started, I immediately noticed that the main screen was not showing anything. I thought I had just forgotten to switch to “duplicate” for the video output, and did that with a quick Win-P. However it didn’t “hold”. After 2 seconds, it reverted back to a black display for my attendees. Then I started to really worry. We tried everything, switching from VGA to HDMI, changing the resolution, setting the projector as primary display, but nothing did the trick. The projector was just refusing to show my screen. Now, to show you how despaired I started to be, I even considered using the “extend” setting (which worked just fine), and to use one of the feedback monitors on the floor but really it was super cumbersome. Eventually, my last resort arrived: I started my Samsung Slate 7, which by chance has Visual Studio 12 and Blend 5 installed, plugged the HDMI projector in the dock (yes, I had the dock with me, which I usually don’t!), connected it to internet (had to enter a long password for that), loaded the source code from my main machine using a USB stick and…. finally started to give my presentation. All in all I think we lost about 10 minutes. Amongst the most horrible minutes of my whole life, truly (yes I am blessed, I didn’t have that many horrible minutes in my life ;) I really want to apologize to my attendees. We joked a bit during the attempts to resolve the issue, the reactions I had after the session were all very nice and sympathetic. Only a handful of people left my session while I was having the issues, and I really don’t blame them (who knew how long the problem would last!!). But still, I probably talked at more than 60 sessions over the years, and this was by far my most painful moment. What did I learn? So what did I learn from this? Well from now on I will always have my slate ready with the latest source code, internet connection and every tool I might need during the presentation. 
This way, if I detect even a hint that the Precision might not work, I will just switch to the Slate. The experience of presenting on the slate is actually not bad at all, it is just a bit slow for my taste, but it does work. By the way, I will be posting the code and slides for my sessions very soon, I just need to “clean it and zip it”. Stay tuned, and thanks again for your patience in that horrible circumstance. Cheers Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Crime Scene Investigation: SQL Server

    - by Rodney Landrum
    "The packages are running slower in Prod than they are in Dev." My week began with this simple declaration from one of our lead BI developers, quickly followed by an emailed spreadsheet demonstrating that, over 5 executions, an extensive ETL process was running, on average, 630 seconds faster on Dev than on Prod. The situation needed some scientific investigation to determine why the same code, the same data, the same schema would yield consistently slower results on a more powerful server. Prod had yet to be officially christened with a "Go Live" date so I had the time, and having recently been binge watching CSI: New York, I also had the inclination.

    An inspection of the two systems, Prod and Dev, revealed the first surprise: although Prod was indeed a "bigger" system, with double the amount of RAM of Dev, the latter actually had twice as many processor cores. On neither system did I see much sign of resources being heavily taxed while the ETL process was running. Without any real supporting evidence, I jumped to a conclusion that my years of performance tuning should have helped me avoid: that the hardware differences explained the better performance on Dev. We spent time setting up a Test system, similarly scoped to Prod except with 4 times the cores, and ported everything across. The results of our careful benchmarks left us truly bemused; the ETL process on the new server was slower than on both other systems. We burned more time tweaking server configurations, monitoring IO and network latency, several times believing we'd uncovered the smoking gun, until the results of subsequent test runs pitched us back into confusion.

    Finally, I decided, enough was enough. Hadn't I learned very early in my DBA career that almost all bottlenecks were caused by code and database design, not hardware? It was time to get back to basics. With over 100 SSIS packages and hundreds of queries, each handling specific tasks such as file loads, bulk inserts, transforms, logging, and so on, the task seemed formidable. And yet, after barely an hour spent with Profiler, Extended Events, and wait statistics DMVs, I had a lead in the shape of a query that joined three tables containing millions of rows, returned 3279 results, but performed 239K logical reads. As soon as I looked at the execution plans for the query in Dev and Test I saw the culprit: an implicit conversion warning on a join predicate field that was numeric in one table and a varchar(50) in another! I turned this information over to the BI developers, who quickly resolved the data type mismatches and found and fixed "several" others as well. After the schema changes, the same query with the same databases ran in under 1 second on all systems and the logical reads dropped to fewer than 300.

    The analysis also revealed that on Dev the ETL task was pulling data across a LAN, whereas Prod and Test were connected across a slower WAN, in large part explaining why the same process ran slower on the latter two systems. Loading the data locally on Prod delivered a further 20% gain in performance.

    As we progress through our DBA careers we learn valuable lessons. Sometimes, with a project deadline looming and pressure mounting, we choose to forget them. I was close to giving in to the temptation to throw more hardware at the problem. I'm pleased at least that I resisted, though I still kick myself for not looking at the code on day one.
It can seem a daunting prospect to return to the fundamentals of the code so close to roll out, but with the right tools, and surprisingly little time, you can collect the evidence that reveals the true problem. It is a lesson I trust I will remember for my next 20 years as a DBA, if I’m ever again tempted to bypass the evidence.
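
    To make the smoking gun concrete for readers who have not chased one of these before, here is a minimal, hypothetical repro of that kind of join-predicate mismatch; the table and column names are invented for illustration and are not the schema from the story above.

        -- One table stores the key as a number, the other as text.
        CREATE TABLE dbo.Orders   (OrderID  int         NOT NULL);
        CREATE TABLE dbo.OrderLog (OrderRef varchar(50) NOT NULL);

        -- The mismatched join forces an implicit conversion on every row,
        -- which shows up as a CONVERT_IMPLICIT warning in the execution plan
        -- and defeats an efficient index seek.
        SELECT o.OrderID
        FROM   dbo.Orders   AS o
        JOIN   dbo.OrderLog AS l ON o.OrderID = l.OrderRef;

        -- The fix is to make the data types agree so the predicate compares like with like.
        ALTER TABLE dbo.OrderLog ALTER COLUMN OrderRef int NOT NULL;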

    Read the article

  • Microsoft Build 2012 Day 1 Keynote Summary

    - by Tim Murphy
    So I have finally dried the tears after watching the keynote for Build 2012. This wasn't because it was an emotional presentation, but because for the second year I missed the goodies. Each on-site attendee got a Surface RT, a Lumia 920 and a voucher for 100GB of SkyDrive storage.

    The event was opened with the announcement that in the three days since the launch of Windows 8 over 4 million upgrades have been sold. I don't care who you are, that is an impressive stat. Ballmer then spent a fair amount of time remaking the case for the Windows and Windows Phone platforms, similar to what we have heard over the last two launch events.

    There were some cool, but non-essential, demos. The one that was the most fun was the Perceptive Pixel 82" slate device. At first glance I wondered why I would ever want such a device, but then Ballmer explained its possible use for schools and boardrooms. That actually made sense.

    Then things got strange. Steve started explaining features that developers could leverage. Usually this type of information is left to the product leads. He focused on the integration with the Charms features such as Search and Share.

    Steve "Guggs" Guggenheim showed off an app from Disney that would appeal to my kids called "Agent P", which is based on Phineas and Ferb. Then he got to the meat of the presentation. We found out that you could add a tile that can be used to sell ad space. In the same vein we also found out that you could use Microsoft's, PayPal's or any commerce engine of your own creation or choosing.

    For those who are interested in sports, and especially developing sports apps, there was a small presentation from Michael Bayle of ESPN. He introduced the ESPN app, which has tons of features. For the developers in the crowd he also mentioned that ESPN has an API available at developer.espn.com.

    During the launch events we were told apps were coming. In this presentation we were actually shown a scrolling list of logos and told about a couple of them. Ballmer mentioned specifically Twitter, SAP and DropBox, just a couple of names from an impressive list.

    Steve Ballmer addressed the question of why you should develop for the Windows 8 platform. He feels that Microsoft has the best commercial terms for developers, a better way to build apps than other platforms and a variety of form factors. His key point, though, was the available volume of customers given the current Windows install base and assuming even flat growth of the platform. This he backed with a promise that Microsoft is going to do better at marketing and you won't be able to avoid the ads that they are bringing out.

    The last section of the keynote was presented by Kevin Gallo from the Windows Phone team. This was the real reason I tuned into the webcast. He impressed upon those watching that the strength of developing for the Microsoft platform is the common programming model that now exists. While there are differences between form factor implementations, you can leverage code across them.

    He claimed that 90% of developer requests for Windows Phone 8 had been implemented. These include:

    • More controls with better performance
    • Better live tiles, including lock screen integration
    • Speech support in custom apps
    • Easier submission to the marketplace
    • App camera integration
    • VOIP and chat support
    • Bluetooth and NFC support
    • Native C++ development
    • Direct 3D development

    The quote from Kevin that stood out for me was that "Take your Dramamine and buckle your seatbelt type of games are coming to Windows Phone 8". He backed this up by displaying a list of game development frameworks and then having Unity come out and do a demo.

    Ok, almost done... The last two things of note for me were the announcement that the SDK is immediately available at dev.windowsphone.com and that they were reducing the cost of an individual developer account to $8 for the next 8 days. Let the development commence.

    del.icio.us Tags: Build 2012, Windows 8, Windows Phone 8, Windows Phone

    Read the article

  • Simple OpenGL program major slow down at high resolution

    - by Grieverheart
    I have created a small OpenGL 3.3 (Core) program using freeglut. The whole geometry is two boxes and one plane with some textures. I can move around like in an FPS and that's it. The problem is that I face a big slow-down in fps when I make my window large (i.e. above 1920x1080). I have monitored GPU usage in full-screen and it shows a GPU load of nearly 100% and a Memory Controller load of ~85%. At 600x600, these numbers are at about 45%, and my CPU is also at full load. I use deferred rendering at the moment, but even with forward rendering the slow-down was nearly as severe. I can't imagine my GPU is not powerful enough for something this simple when I play many games at 1080p (I have a GeForce GT 120M btw). Below are my shaders.

    First Pass

    #VS

        #version 330 core

        uniform mat4 ModelViewMatrix;
        uniform mat3 NormalMatrix;
        uniform mat4 MVPMatrix;
        uniform float scale;

        layout(location = 0) in vec3 in_Position;
        layout(location = 1) in vec3 in_Normal;
        layout(location = 2) in vec2 in_TexCoord;

        smooth out vec3 pass_Normal;
        smooth out vec3 pass_Position;
        smooth out vec2 TexCoord;

        void main(void){
            pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz;
            pass_Normal = NormalMatrix * in_Normal;
            TexCoord = in_TexCoord;
            gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0);
        }

    #FS

        #version 330 core

        uniform sampler2D inSampler;

        smooth in vec3 pass_Normal;
        smooth in vec3 pass_Position;
        smooth in vec2 TexCoord;

        layout(location = 0) out vec3 outPosition;
        layout(location = 1) out vec3 outDiffuse;
        layout(location = 2) out vec3 outNormal;

        void main(void){
            outPosition = pass_Position;
            outDiffuse = texture(inSampler, TexCoord).xyz;
            outNormal = pass_Normal;
        }

    Second Pass

    #VS

        #version 330 core

        uniform float scale;

        layout(location = 0) in vec3 in_Position;

        void main(void){
            gl_Position = mat4(1.0) * vec4(scale * in_Position, 1.0);
        }

    #FS

        #version 330 core

        struct Light{
            vec3 direction;
        };

        uniform ivec2 ScreenSize;
        uniform Light light;
        uniform sampler2D PositionMap;
        uniform sampler2D ColorMap;
        uniform sampler2D NormalMap;

        out vec4 out_Color;

        vec2 CalcTexCoord(void){
            return gl_FragCoord.xy / ScreenSize;
        }

        vec4 CalcLight(vec3 position, vec3 normal){
            vec4 DiffuseColor = vec4(0.0);
            vec4 SpecularColor = vec4(0.0);

            vec3 light_Direction = -normalize(light.direction);
            float diffuse = max(0.0, dot(normal, light_Direction));

            if(diffuse > 0.0){
                DiffuseColor = diffuse * vec4(1.0);

                vec3 camera_Direction = normalize(-position);
                vec3 half_vector = normalize(camera_Direction + light_Direction);

                float specular = max(0.0, dot(normal, half_vector));
                float fspecular = pow(specular, 128.0);
                SpecularColor = fspecular * vec4(1.0);
            }
            return DiffuseColor + SpecularColor + vec4(0.1);
        }

        void main(void){
            vec2 TexCoord = CalcTexCoord();
            vec3 Position = texture(PositionMap, TexCoord).xyz;
            vec3 Color = texture(ColorMap, TexCoord).xyz;
            vec3 Normal = normalize(texture(NormalMap, TexCoord).xyz);

            out_Color = vec4(Color, 1.0) * CalcLight(Position, Normal);
        }

    Is it normal for the GPU to be used that much under the described circumstances? Is it due to poor performance of freeglut? I understand that the problem could be specific to my code, but I can't paste the whole code here; if you need more info, please tell me.

    Read the article

  • MIX 2010 Covert Operations Day 3

    - by GeekAgilistMercenary
    I rolled over to the Mandalay for breakfast.  There I met a couple guys that were really excited about the new Windows 7 Phone.  They, as I, are also hopeful that the phone really gets a big push and some penetration into the market.  Not because we don’t like any other of the phones, but because this phone is so much better in many ways.  From a developer's perspective creating applications in Windows 7 Mobile will be vastly superior in ease, capabilities, and other aspects.  The architectural, existing code base, examples, and provisions to create things on the 7 Mobile Device are already existing as of RIGHT NOW.  There is no reason, except for fickle market conditions, for this phone to not just explode onto the market.  But alas, I won't hold my breath. Day three keynote had a whole new slew of things provided.  It also seemed that things got a lot more technical on this second keynote.  The oData was one of the very technical bits, yet it included almost no code.  Starting with a Netflix example and all the way to the Codename "Dallas" effort the oData Services provide some expansive possibilities. A mash up going 4 ways was then shown for finding a movie, finding local places to have a viewing, and information about the movie and were to prospectively find and buy additional movie bits.  The display was of course, in a Windows 7 Mobile device with literally a click to view each set of data.  The backend and the front end of this was beautifully smooth. The Dallas Project has a lot of potential for analytics in dashboard and scorecard creation also.  If there is a need or reason to provide data to a vast and wide range of clients, Dallas is a prime example of how to do that. Azure Clouds After the main keynote I checked out (while developing a working WPF & Silverlight Application for work) the session on deploying ASP.NET Applications, services, etc, into the cloud.  The session was pretty good, but I'll admit I got a little unfocused from it a few times.  It is after all hard to do two things at one time. I did take note that the cloud still is a multiple step process for deploying to.  This is a good thing and a bad thing.  There needs to be more checks and verifications when deploying something into the cloud just for technical reasons.  However, I feel that there should be some streamlining to the process.  Going back and forth between web and Visual Studio as the interface also seems kind of clunky.  Deployment should be able to be completed from within Visual Studio in my perspective.  Overall, the cloud is getting more and more impressive in function as well as theory. That's it from me so far on the third day of MIX.  I'll be note taking and studying hard to have more good tidbits to provide. Thanks for reading, if you're curious about more of my writing, check out this original entry at my other blog Agilist Mercenary.

    Read the article

  • Oracle Retail Mobile Point-of-Service

    - by David Dorf
    When most people discuss mobile in retail, they immediately go to shopping applications. While I agree the consumer side of mobile is huge, I believe it's also important to arm store associates with mobile tools. There are around a dozen major roll-outs of mobile POS to chain retailers, and all have been successful. This does not, however, signal the demise of traditional registers. Retailers will adopt mobile POS slowly and reduce the number of fixed registers over time, but there's likely to be a combination of both for the foreseeable future. Even Apple retains at least one fixed register in every store; you just have to know where to look.

    The business benefits for mobile POS are pretty straightforward:

    1. Faster checkout. Walmart's CFO recently reported that for every second they shave off the average transaction time, they can potentially save $12M a year in labor. I think it's more likely that labor will be redeployed to enhance the customer experience.

    2. Smarter associates. The sales associates on the floor need the same access to information that consumers have, if not more. They need ready access to product details, reviews, inventory, etc. to meet consumer expectations. In a recent study, 40% of consumers said a savvy store associate can impact their final product selection more than a website.

    3. Lower costs. Mobile POS hardware (iPod touch + sled) costs about a fifth of fixed registers, not to mention the reclaimed space that can be used for product displays.

    But almost all mobile POS solutions can claim those benefits equally. Where there's differentiation is on the technical side. Oracle recently announced availability of the Oracle Retail Mobile Point-of-Service, and it has three big technology advantages in the market:

    1. Portable. We used a popular open-source component called PhoneGap that abstracts the app from the underlying OS and hardware so that iOS, Android, and other platforms could be supported. Further, we used Web technologies such as HTML5 and JavaScript, which are commonly known by many programmers, as opposed to Objective-C, which is more difficult to find. The screen can adjust to different form factors and sizes, just like you see with browsers. In the future when a new, zippy device gets released, retailers will have the option to move to that device more easily than if they used a native app.

    2. Flexible. Our Mobile POS is free with the Oracle Retail Point-of-Service product. Retailers can use any combination of fixed and mobile registers, and those ratios can change as required. Perhaps start with 1 mobile and 4 fixed per store, then transition over time to 4 mobile and 1 fixed without any additional software licenses. Our scalable solution supports lots of combinations.

    3. Consistent. Because our Mobile POS is fully integrated with our traditional POS, the same business logic is reused. Third-party Mobile POS solutions often handle pricing, promotions, and tax calculations separately, leading to possible inconsistencies within the store. That won't happen with Oracle's solution.

    For many retailers, mobile POS can lower costs, increase customer service, and generally enhance a consumer's in-store experience. Apple led the way, but lots of other retailers are discovering the many benefits of adding mobile capabilities in their stores. Just be sure to examine both the business and technology benefits so you get the most value from your solution for the longest period of time.

    Read the article

  • Stir Trek 2: Iron Man Edition

    Next month (7 May 2010) I'll be presenting at the second annual Stir Trek event in Columbus, Ohio. Stir Trek (so named because last year its themes mixed MIX and the opening of the Star Trek movie) is a very cool local event. It's a lot of fun to present at and to attend, because of its unique venue: a movie theater. And what's more, the cost of admission includes a private showing of a new movie (this year: Iron Man 2). The sessions cover a variety of topics (not just Microsoft), similar to CodeMash. The event recently sold out, so I'm not telling you all of this so that you can go and sign up (though I believe you can still get on the waitlist). Rather, this is pretty much just an excuse for me to talk about my session as a way to organize my thoughts.

    I'm actually speaking on the same topic as I did last year, but the key difference is that last year the subject of my session was nowhere close to being released, and this year it's RTM (as of last week). That's right, the topic is What's New in ASP.NET 4. How did you guess?

    What's New in ASP.NET 4

    So, just what *is* new in ASP.NET 4? Hasn't Microsoft been spending all of their time on Silverlight and MVC the last few years? Well, actually, no. There are some pretty cool things that are now available out of the box in ASP.NET 4. There's a nice summary of the new features on MSDN. Here is my super-brief summary:

    • Extensible Output Caching: use providers like distributed cache or file system cache
    • Preload Web Applications: IIS 7.5 only; avoid the startup tax for your site by preloading it.
    • Permanent (301) Redirects: finally supported by the framework in one line of code, not two (a short example appears at the end of this post).
    • Session State Compression: can speed up session access in a web farm environment. Test it to see.
    • Web Forms features, several of which mirror ASP.NET MVC advantages (viewstate, control ids):
      • Set Meta Keywords and Description easily
      • Granular and inheritable control over ViewState
      • Support for more recent browsers and devices
      • Routing (introduced in 3.5 SP1): some new features and zero web.config changes required
      • Client ID control: makes client manipulation of DOM elements much simpler.
      • Row Selection in Data Controls fixed (id based, not row index based)
      • FormView and ListView enhancements (less markup, more CSS compliant)
      • New QueryExtender control makes filtering data from other Data Source Controls easy
      • More CSS and Accessibility support
      • Reduction of Tables and more control over output for other template controls
    • Dynamic Data enhancements:
      • More control templates
      • Support for inheritance in the Data Model
      • New Attributes
    • ASP.NET Chart Control (learn more)
    • Lots of IDE enhancements
    • Web Deploy tool

    My session will cover many but not all of these features. There's only an hour (3pm-4pm), and it's right before the prize giveaway and movie showing, so I'll be moving quickly and most likely answering questions off-line via email after the talk. Hope to see you there!

    Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.
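
    Since the one-line redirect is easy to show, here is a minimal ASP.NET 4 Web Forms code-behind sketch of it; the page class name and target URL are invented for illustration.

        using System;
        using System.Web.UI;

        public partial class OldProductPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Before ASP.NET 4, a permanent redirect took two statements:
                //   Response.Status = "301 Moved Permanently";
                //   Response.AddHeader("Location", "/products/new");

                // ASP.NET 4 provides it in one line:
                Response.RedirectPermanent("/products/new");
            }
        }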

    Read the article

  • SQL 2012 Licensing Thoughts

    - by Geoff N. Hiten
    The only thing more controversial than new Federal Tax plans is new Licensing plans from Microsoft. In both cases, everyone calculates several numbers. First, will I pay more or less under this plan? Second, will my competition pay more or less than now? Third, will <insert interesting person/company here> pay more or less? Not that items 2 and 3 are meaningful; that is just how people think. Much like tax plans, the devil is in the details, so let's see how this looks. Microsoft shows it here: http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-licensing.aspx

    First up is a switch from per-socket to per-core licensing. Anyone who didn't see something like this coming should rapidly search for a new line of work because you are not paying attention. The explosion of multi-core processors has made SQL Server a bargain. Microsoft is in business to make money and the old per-socket model was not going to do that going forward.

    Per-core licensing also simplifies virtualization licensing. Physical Core = Virtual Core, at least for licensing. Oversubscribe your processors, that's your lookout. You still pay for what is exposed to the VM. The cool part is you can seamlessly move physical and virtual workloads around and the licenses follow. The catch is you have to have Software Assurance to make the licenses mobile. Nice touch there.

    Let's have a moment of silence for the late, unlamented, largely ignored Workgroup Edition. To quote the Microsoft FAQ: "Standard becomes our sole edition for basic database needs". Considering I haven't encountered a single instance of SQL Server Workgroup Edition in the wild, I don't think this will be all that controversial.

    As for pricing, it looks like a wash with current per-socket pricing based on four-core sockets. Interestingly, four is also the minimum core count Microsoft proposes to use when swapping per-socket licenses to per-core if you are on Software Assurance. Reading the fine print shows that if you are using more, you will get more core licenses. From the licensing FAQ:

    15. How do I migrate from processor licenses to core licenses? What is the migration path?

    Licenses purchased with Software Assurance (SA) will upgrade to SQL Server 2012 at no additional cost. EA/EAP customers can continue buying processor licenses until your next renewal after June 30, 2012. At that time, processor licenses will be exchanged for core-based licenses sufficient to cover the cores in use by processor-licensed databases (minimum of 4 cores per processor for Standard and Enterprise, and minimum of 8 EE cores per processor for Datacenter).

    Looks like the folks who invested in the AMD 12-core chips will make out like bandits (rough numbers at the end of this post).

    Now, on to something new: SQL Server Business Intelligence Edition. Yep, finally a BI-specific SKU licensed for server+CAL configurations only. Note that Enterprise Edition still supports the complete feature set; the BI Edition is intended for smaller shops who want to use the full BI feature set but without needing Enterprise Edition scale (or costs). No, you don't get ColumnStore, Compression, or Partitioning in the BI Edition. Those are Enterprise scale features, ThankYouVeryMuch. Then again, your starting licensing costs are about one sixth of an Enterprise Edition system (based on an 8 core server).

    The only part of the message I am missing is whether the current Failover Licensing Policy will change. Do we need to fully or partially license failover servers? That is a detail I definitely want to know.
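
    To put rough numbers on that exchange (my own arithmetic, using only the minimums quoted in the FAQ above): a two-socket server built on those 12-core AMD parts exposes 2 x 12 = 24 physical cores, so on Software Assurance its two processor licenses are exchanged for 24 core licenses, the actual cores in use. A two-socket quad-core box gets only the 2 x 4 = 8 core minimum. Same two processor licenses going in, three times the core licenses coming out, which is why the 12-core crowd makes out like bandits.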

    Read the article

  • AccelerometerInput XNA GameComponent

    - by Michael B. McLaughlin
    Bad accelerometer controls kill otherwise good games. I decided to try to do something about it. So I create an XNA GameComponent called AccelerometerInput. It’s still a beta project but you are welcome to try it, use it, modify it, etc. I’m releasing under the terms of the Microsoft Public License. Important info: First, it only supports tilt-style controls currently. I have not implemented motion-style controls yet (and make no promises as to when I might find time to do so). Second, I commented it heavily so that you can (hopefully) understand what it is doing. Please read the comments and examine the sample game for a usage overview. There are configurable parameters which I encourage you to make use of (both by modifying the default values where your testing shows it to be appropriate and also by implementing a calibration mechanism in your game that lets the user adjust those configurable values based on his or her own circumstances). Third, even with this code, accelerometer controls are still a fairly advanced topic area; you will likely find nothing but disappointment if you simply plunk this into some project without testing it on a device (or preferably on several devices). Fourth, if you do try this code and find that something doesn’t work as expected on your phone, please let me know as I want to improve it and can only do so with your help. Let me know what phone model it is, what you tried doing, what you expected, and what result you had instead. I may or may not be able to incorporate it into the code, but I can let others know at the very least so that they can make appropriate modifications to their games (I’m hopeful that all phones are reasonably similar in their workings and require, at most, a slight calibration change, but I simply don’t know). Fifth, although I’ll do my best to answer any questions you may have about it, I’m very busy with a number of things currently so it might take a little while. Please look through the code and examine the comments and sample game first before asking any questions. It’s likely that the answer is in there. If not, or if you just aren’t really sure, ask away. Sixth, there are differences between a portrait-mode game and a landscape mode game (specifically in the appropriate default tilt adjustment for toward the user/away from the user calculations). This is documented and the default is set for landscape. If you use this for a portrait game, make the appropriate change (look for the TODO: comment in AccelerometerInput.cs). Seventh, no provision whatsoever is made for disabling screen locking. It is up to you to implement that and to take appropriate measures to detect when the user has been idle for too long and timeout the game. That code is very game-specific. If you have questions about such matters, consult the relevant MSDN documentation and, if you still have questions, visit the App Hub forums and ask there. I answer questions there a lot and so I may even stumble across your question and answer it. But that’s a much better forum than the comments section here for questions of that sort so I would appreciate it if you asked idle detection-related questions there (or on some other suitable site that you may be more familiar and comfortable with). Eighth, this is an XNA GameComponent intended for XNA-based games on WP7. A sufficiently knowledgeable Silverlight developer should have no problem adapting it for use in a Silverlight game or app. I may create a Silverlight version at some point myself. 
Right now I do not have the time, unfortunately. Ok. Without further ado: http://www.bobtacoindustries.com/developers/utils/AccelerometerInput.zip Have a great St. Patrick’s Day!

    Read the article

  • Simple Merging Of PDF Documents with iTextSharp 5.4.5.0

    - by Mladen Prajdic
    As we were working on our first SQL Saturday in Slovenia, we came to a point when we had to print out the so-called SpeedPASSes for attendees. This SpeedPASS file is a PDF and contains their raffle, lunch and admission tickets. The problem is we have to download one PDF per attendee and print that out, and printing more than 10 docs at once is a pain. So I decided to make a little console app that would merge multiple PDF files into a single file that would be much easier to print. I used an open-source PDF manipulation library called iTextSharp, version 5.4.5.0.

    This is the console program I used. It's brilliantly named MergeSpeedPASS. It only has two methods and is really short. Don't let the name fool you; it can be used to merge any PDF files. The first parameter is the name of the target PDF file that will be created. The second parameter is the directory containing PDF files to be merged into a single file.

        using iTextSharp.text;
        using iTextSharp.text.pdf;
        using System;
        using System.IO;

        namespace MergeSpeedPASS
        {
            class Program
            {
                static void Main(string[] args)
                {
                    if (args.Length == 0 || args[0] == "-h" || args[0] == "/h")
                    {
                        Console.WriteLine("Welcome to MergeSpeedPASS. Created by Mladen Prajdic. Uses iTextSharp 5.4.5.0.");
                        Console.WriteLine("Tool to create a single SpeedPASS PDF from all downloaded generated PDFs.");
                        Console.WriteLine("");
                        Console.WriteLine("Example: MergeSpeedPASS.exe targetFileName sourceDir");
                        Console.WriteLine("         targetFileName = name of the new merged PDF file. Must include .pdf extension.");
                        Console.WriteLine("         sourceDir      = path to the dir containing downloaded attendee SpeedPASS PDFs");
                        Console.WriteLine("");
                        Console.WriteLine(@"Example: MergeSpeedPASS.exe MergedSpeedPASS.pdf d:\Downloads\SQLSaturdaySpeedPASSFiles");
                    }
                    else if (args.Length == 2)
                    {
                        CreateMergedPDF(args[0], args[1]);
                    }

                    Console.WriteLine("");
                    Console.WriteLine("Press any key to exit...");
                    Console.Read();
                }

                static void CreateMergedPDF(string targetPDF, string sourceDir)
                {
                    using (FileStream stream = new FileStream(targetPDF, FileMode.Create))
                    {
                        Document pdfDoc = new Document(PageSize.A4);
                        PdfCopy pdf = new PdfCopy(pdfDoc, stream);
                        pdfDoc.Open();

                        var files = Directory.GetFiles(sourceDir);
                        Console.WriteLine("Merging files count: " + files.Length);

                        int i = 1;
                        foreach (string file in files)
                        {
                            Console.WriteLine(i + ". Adding: " + file);
                            pdf.AddDocument(new PdfReader(file));
                            i++;
                        }

                        if (pdfDoc != null)
                            pdfDoc.Close();

                        Console.WriteLine("SpeedPASS PDF merge complete.");
                    }
                }
            }
        }

    Hope it helps you and have fun.

    Read the article

  • DATEFROMPARTS

    - by jamiet
    I recently overheard a remark by Greg Low in which he said something akin to "the most interesting parts of a new SQL Server release are the myriad of small things that are in there that make a developer's life easier" (I'm paraphrasing because I can't remember the actual quote, but it was something like that). The new DATEFROMPARTS function is a classic example of that. It simply takes three integer parameters and builds a date out of them (if you have used DateSerial in Reporting Services then you'll understand). Take the following code, which generates the first and last day of some given years:

        SELECT 2008 AS Yr INTO #Years
        UNION ALL SELECT 2009
        UNION ALL SELECT 2010
        UNION ALL SELECT 2011
        UNION ALL SELECT 2012

        SELECT [FirstDayOfYear] = CONVERT(DATE,CONVERT(CHAR(8),((y.[Yr] * 10000) + 101))),
               [LastDayOfYear]  = CONVERT(DATE,CONVERT(CHAR(8),((y.[Yr] * 10000) + 1231)))
        FROM   #Years y

    That code is pretty gnarly though with those CONVERTs in there and, worse, if the character string is constructed in a certain way then it could fail due to localisation. Check this out:

        SET LANGUAGE french;
        SELECT dt, Month_Name = DATENAME(mm, dt)
        FROM   (
               SELECT dt = CONVERT(DATETIME, CONVERT(CHAR(4), y.[Yr]) + N'-01-02')
               FROM   #Years y
               ) d;

        SET LANGUAGE us_english;
        SELECT dt, Month_Name = DATENAME(mm, dt)
        FROM   (
               SELECT dt = CONVERT(DATETIME, CONVERT(CHAR(4), y.[Yr]) + N'-01-02')
               FROM   #Years y
               ) d;

    Notice how the datetime has been converted differently based on the language setting. When French, the string "2012-01-02" gets interpreted as 1st February, whereas when us_english the same string is interpreted as 2nd January. Instead of all this CONVERTing nastiness we have DATEFROMPARTS:

        SELECT [FirstDayOfYear] = DATEFROMPARTS(y.[Yr], 1, 1),
               [LastDayOfYear]  = DATEFROMPARTS(y.[Yr], 12, 31)
        FROM   #Years y

    How much nicer is that? The bad news of course is that you have to upgrade to SQL Server 2012 or migrate to SQL Azure if you want to use it, as is the way of the world!

    Don't forget that if you want to try this code out on SQL Azure right this second, for free, you can do so by connecting up to AdventureWorks On Azure. You don't even need to have SSMS handy - a browser that runs Silverlight will do just fine. Simply head to https://mhknbn2kdz.database.windows.net/ and use the following credentials:

    Database: AdventureWorks2012
    User: sqlfamily
    Password: sqlf@m1ly

    One caveat: SELECT INTO doesn't work on SQL Azure, so you'll have to use this instead:

        DECLARE @y TABLE ([Yr] INT);
        INSERT @y([Yr])
        SELECT 2008 AS Yr UNION ALL SELECT 2009 UNION ALL SELECT 2010 UNION ALL SELECT 2011 UNION ALL SELECT 2012;

        SELECT [FirstDayOfYear] = DATEFROMPARTS(y.[Yr], 1, 1),
               [LastDayOfYear]  = DATEFROMPARTS(y.[Yr], 12, 31)
        FROM   @y y;

        SELECT [FirstDayOfYear] = CONVERT(DATE,CONVERT(CHAR(8),((y.[Yr] * 10000) + 101))),
               [LastDayOfYear]  = CONVERT(DATE,CONVERT(CHAR(8),((y.[Yr] * 10000) + 1231)))
        FROM   @y y;

    @Jamiet

    Read the article

  • How do you update live web sites with code changes?

    - by Aaron Anodide
    I know this is a very basic question. If someone could humor me and tell me how they would handle this, I'd be grateful. I decided to post this because I am about to install SynchToy to remedy the issue below, and I feel a bit unprofessional using a "Toy" but I can't think of a better way. Many times I find when I am in this situation, I am missing some painfully obvious way to do things - this comes from being the only developer in the company.

    • ASP.NET web application developed on my computer at work
    • Solution has 2 projects: Website (files) and WebsiteLib (C#/dll)
    • Using a Git repository
    • Deployed on a GoGrid 2008R2 web server

    Deployment:

    1. Make code changes.
    2. Push to Git.
    3. Remote desktop to server.
    4. Pull from Git.
    5. Overwrite the live files by dragging/dropping with Windows Explorer.

    In Step 5 I delete all the files from the website root... this can't be a good thing to do. That's why I am about to install SynchToy...

    UPDATE: THANKS for all the useful responses. I can't pick which one to mark answer - between using a web deployment - it looks like I have several useful suggestions:

    • Web Project = whole site packaged into a single DLL - downside for me: I can't push simple updates - being a lone developer in a company of 50, this remains something that is simpler at times.
    • Pulling straight from SCM into web root of site - I originally didn't do this out of fear that my SCM hidden directory might end up being exposed, but the answers here helped me get over that (although I still don't like having one more thing to worry about forgetting to make sure is still true over time).
    • Using a web farm, and systematically deploying to nodes - this is the ideal solution for zero downtime, which is actually something I care about since the site is essentially a real-time revenue source for my company - I might have a hard time convincing them to double the cost of the servers though.
    • Finally, the reinforcement of the basic principle that there needs to be a single-click deployment for the site OR ELSE THERE IS SOMETHING WRONG is probably the most useful thing I got out of the answers.

    UPDATE 2: I thought I'd come back to this and update with the actual solution that's been in place for many months now and is working perfectly (for my single web server solution). The process I use is:

    1. Make code changes
    2. Push to Git
    3. Remote desktop to server
    4. Pull from Git
    5. Run the following batch script:

        cd C:\Users\Administrator
        %systemroot%\system32\inetsrv\appcmd.exe stop site "/site.name:Default Web Site"
        robocopy Documents\code\da\1\work\Tree\LendingTreeWebSite1 c:\inetpub\wwwroot /E /XF connectionsconfig Web.config
        %systemroot%\system32\inetsrv\appcmd.exe start site "/site.name:Default Web Site"

    As you can see this brings the site down, uses robocopy to intelligently copy the files that have changed, then brings the site back up. It typically runs in less than 2 seconds. Since peak traffic on this site is about 2 requests per second, missing 4 requests per site update is acceptable. Since I've gotten more proficient with Git I've found that the first four steps above being a "manual process" is also acceptable, although I'm sure I could roll the whole thing into a single click if I wanted to. The documentation for AppCmd.exe is here. The documentation for Robocopy is here.

    Read the article

  • PowerShell: Read Excel to Create Inserts

    - by BuckWoody
    I'm writing a series of articles on how to migrate "departmental" data into SQL Server. I also hold workshops on the entire process - from discovering that the data exists to the modeling process and then how to design the Extract, Transform and Load (ETL) process. Finally I write about (and teach) a few methods on actually moving the data. One of those options is to use PowerShell. There are a lot of ways even with that choice, but the one I show is to read two columns from the spreadsheet and output statements that would insert the data using a stored procedure. Of course, you could re-write this as INSERT statements, out to a text file for bcp, or even use a database connection in the script to move the data directly from Excel into SQL Server.

    This snippet won't run on your system, of course - it assumes a Microsoft Office Excel 2007 spreadsheet located at c:\temp called VendorList.xlsx. It looks for a tab in that spreadsheet called Vendors. The statement that does the writing just uses one column: Vendor Code. Here's the breakdown of what I'm doing:

    In the first block, I connect to Microsoft Office Excel. That connection string is specific to Excel 2007, so if you need a different version you'll need to look that up. In the second block I set up a selection from the entire spreadsheet based on that tab. Note that if you're only after certain data you shouldn't get the whole spreadsheet - that's just good practice. In the next block I create the text I want, inserting the Vendor Code field as I go. Finally I close the connection. Enjoy!

        $ExcelConnection = New-Object -com "ADODB.Connection"
        $ExcelFile = "c:\temp\VendorList.xlsx"
        $ExcelConnection.Open("Provider=Microsoft.ACE.OLEDB.12.0;`
            Data Source=$ExcelFile;Extended Properties=Excel 12.0;")
        $strQuery = "Select * from [Vendors$]"
        $ExcelRecordSet = $ExcelConnection.Execute($strQuery)
        do {
            Write-Host "EXEC sp_InsertVendors '" $ExcelRecordSet.Fields.Item("Vendor Code").Value "'"
            $ExcelRecordSet.MoveNext()
        } Until ($ExcelRecordSet.EOF)
        $ExcelConnection.Close()

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • Is there a theory for "transactional" sequences of failing and no-fail actions?

    - by Ross Bencina
    My question is about writing transaction-like functions that execute sequences of actions, some of which may fail. It is related to the general C++ principle "destructors can't throw", to the no-fail property, and maybe also to multi-phase transactions or exception safety. However, I'm thinking about it in language-neutral terms. My concern is with correctly designing error handling in C++ functions that must be reliable. I would like to know what the concepts below are called so that I can learn more about them. I'm sorry that I can't ask the question more directly. Since I don't know this area I have provided an example to explain my question. The question is at the end. Here goes:

    Consider a sequence of steps or actions executed sequentially, where actions belong to one of two classes: those that always succeed, and those that may fail. In the examples below:

    S stands for an action that always succeeds (called "no-fail" in some settings).
    F stands for an action that may fail (for example, it might fail to allocate memory or do I/O that could fail).

    Consider a sequence of actions (executed sequentially from left to right):

        S->S->S->S

    Since each action in the sequence above succeeds, the whole sequence succeeds. On the other hand, the following sequence may fail because the last action may fail:

        S->S->S->F

    So, claim: a sequence has the no-fail (S) property if and only if all of its actions are no-fail.

    Now, I'm interested in action sequences that form "atomic transactions", with "failure atomicity", i.e. where either the whole sequence completes successfully, or there is no effect. I.e. if some action fails, the earlier ones must be rolled back. This requires that any successfully executed actions prior to a failing action must always be able to be rolled back. Consider the sequence:

        S->S->S->F
        S<-S<-S

    In the example above, the first row is the forward path of the transaction, and the second row holds inverse actions (executed from right to left) that can be used to roll back if the final top-row action fails. It seems to me that for a transaction to support failure atomicity, the following invariant must hold:

    Claim: To support failure atomicity (either completion or complete roll-back on failure) all actions preceding the latest failable (F) action on the forward path (marked * in the example below) must have no-fail (S) inverses. The following is an example of a sequence that supports failure atomicity:

                 *
        S->F->F->F
        S<-S<-S

    Further, if we want the transaction to be able to attempt cancellation mid-way through, but still guarantee either full completion or full rollback, then we need the following property:

    Claim: To support failure atomicity and cancellation mid-way through execution, in the face of errors in the inverse (cancellation) path, all actions following the earliest failable (F) inverse on the reverse path (marked *) must be no-fail (S).

        F->F->F->S->S
        S<-S<-F<-F
                 *

    I believe that these two conditions guarantee that an abortable/cancelable transaction will never get "stuck". My questions are: What is the study and theory of these properties called? Are my claims correct? And what else is there to know?

    UPDATE 1: Updated terminology: what I previously called "robustness" is called atomicity in the database literature.

    UPDATE 2: Added explicit reference to failure atomicity, which seems to be a thing.
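
    Since the question is framed language-neutrally but motivated by C++, here is a small, self-contained C++ sketch of one way the invariant can be enforced in code: every fallible step registers a no-fail inverse only after it succeeds, and rollback runs the recorded inverses in reverse order from a destructor, which is exactly the place C++ requires to be non-throwing. The class and function names are mine, not from the question.

        #include <functional>
        #include <iostream>
        #include <stdexcept>
        #include <vector>

        // A failure-atomic sequence: each step (F) registers its no-fail inverse (S)
        // once it has succeeded. If a later step throws, the destructor runs the
        // recorded inverses latest-first; because they are no-fail, rollback cannot
        // itself get "stuck".
        class Transaction {
        public:
            void Step(std::function<void()> step, std::function<void()> inverse) {
                step();                                   // may fail (F)
                inverses_.push_back(std::move(inverse));  // recorded only after success
            }

            void Commit() { inverses_.clear(); }          // keep the effects

            ~Transaction() {
                // roll back anything not committed, most recent action first
                for (auto it = inverses_.rbegin(); it != inverses_.rend(); ++it) (*it)();
            }

        private:
            std::vector<std::function<void()>> inverses_;
        };

        int main() {
            try {
                Transaction tx;
                tx.Step([] { std::cout << "allocate A\n"; },
                        [] { std::cout << "free A\n"; });
                tx.Step([] { throw std::runtime_error("step 2 failed"); },
                        [] { std::cout << "undo step 2\n"; });  // never registered
                tx.Commit();
            } catch (const std::exception& e) {
                // "free A" has already been printed by the rollback in ~Transaction
                std::cout << "rolled back: " << e.what() << "\n";
            }
        }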

    Read the article

  • Additional new material WebLogic Community

    - by JuergenKress
    Virtual Developer Conference On Demand - Register Updated Book: WebLogic 12c: Distinctive Recipes - Architecture, Development, Administration by Oracle ACE Director Frank Munz - Blog | YouTube Webcast: Migrating from GlassFish to WebLogic - Replay Reliance Commercial Finance Accelerates Time-to-Market, Improves IT Staff Productivity by 70% - Blog | Oracle Magazine Retrieving WebLogic Server Name and Port in ADF Application by Andrejus Baranovskis, Oracle Ace Director - Blog Using Oracle WebLogic 12c with NetBeans IDEOracle ACE Director Markus Eisele walks you through installing and configuring all the necessary components, and helps you get started with a simple Hello World project. Read the article. Video: Oracle A-Team ADF Mobile Persistence SampleThis video by Oracle Fusion Middleware A-Team architect Steven Davelaar demonstrates how to use the ADF Mobile Persistence Sample JDeveloper extension to generate a fully functional ADF Mobile application that reads and writes data using an ADF BC SOAP web service. Watch the video. Java ME 8 ReleaseDownload Java ME today! This release is an implementation of the Java ME 8 standards JSR 360 (CLDC 8) and JSR 361 (MEEP 8), and includes support of alignment with Java SE 8 language features and APIs, an enhanced services-enabled application platform, the ability to "right-size" the platform to address a wide range of target devices, and more. Learn more Download Java ME SDK 8It includes application development support for Oracle Java ME Embedded 8 platforms and includes plugins for NetBeans 8. See the Java ME 8 Developer Tools Documentation to learn JavaOne 2014 Early Bird RateRegister early to save $400 off the onsite price. With the release of Java 8 this year, we have exciting new sessions and an interactive demo space! NetBeans IDE 8.0 Patch UpdateThe NetBeans Team has released a patch for NetBeans IDE 8.0. Download it today to get fixes that enhance stability and performance. Java 8 Questions ForumFor any questions about this new release, please join the conversation on the Java 8 Questions Forum. Java ME 8: Getting Started with Samples and Demo CodeLearn in few steps how to get started with Java ME 8! The New Java SE 8 FeaturesJava SE 8 introduces enhancements such as lambda expressions that enable you to write more concise yet readable code, better utilize multicore systems, and detect more errors at compile time. See What's New in JDK 8 and the new Java SE 8 documentation portal. Pay Less for Java-Related Books!Save 20% on all new Oracle Press books related to Java. Download the free preview sampler for the Java 8 book written by Herbert Schildt, Maurice Naftain, Henrik Ebbers and J.F. DiMarzio. New book: EJB 3 in Action, Second Edition WebLogic 12c Does WebSockets Getting Started by C2B2 Video: Building Robots with Java Embedded Video: Nighthacking TV Watch presentations by Stephen Chin and community members about Java SE, Java Embedded, Java EE, Hadoop, Robots and more. Migrating the Spring Pet Clinic to Java EE 7 Trip report : Jozi JUG Java Day in Johannesburg How to Build GlassFish 4 from Source 4,000 posts later : The Aquarium WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Implementing Service Level Agreements in Enterprise Manager 12c for Oracle Packaged Applications

    - by Anand Akela
    Contributed by Eunjoo Lee, Product Manager, Oracle Enterprise Manager. Service Level Management, or SLM, is a key tool in the proactive management of any Oracle Packaged Application (e.g., E-Business Suite, Siebel, PeopleSoft, JD Edwards E1, Fusion Apps, etc.). The benefit of SLM is that administrators can use representative Application transactions, which run constantly and automatically behind the scenes, to verify that all of the key application and technology components of an Application are available and performing to expectations. A single transaction can verify the availability and performance of the underlying Application tech stack far more efficiently than monitoring the same underlying targets individually.

In this article, we'll be demonstrating SLM using Siebel Applications, but the same tools and processes apply to any of the Packaged Applications mentioned above. In this demonstration, we will log into the Siebel Application, navigate to the Contacts View, update a contact phone record, and then log out. This transaction exposes availability and performance metrics of multiple Siebel Servers, multiple Components and Component Groups, and the Siebel Database - in a single unified manner. We can then monitor and manage these transactions like any other target in EM 12c, including placing proactive alerts on them if the transaction is either unavailable or not performing to required levels.

The first step in the SLM process is recording the Siebel transaction. The following screenwatch demonstrates how to record a Siebel transaction using an EM tool called "OpenScript". A completed recording is called a "Synthetic Transaction".

The second step in the SLM process is uploading the Synthetic Transaction into EM 12c and creating Generic Service Tests. We can create a Generic Service Test to execute our synthetic transactions at regular intervals to evaluate the performance of various business flows. As these transactions run periodically, it is possible to monitor the performance of the Siebel Application by evaluating the performance of the synthetic transactions. The process of creating a Generic Service Test is detailed in the next screenwatch. EM 12c provides a guided workflow for all of the key creation steps, including configuring the Service Test, uploading the Synthetic Test, determining the frequency of the Service Test, establishing beacons, and selecting performance and usage metrics, just to name a few.

The third and final step in the SLM process is the creation of Service Level Agreements (SLAs). Service Level Agreements allow administrators to use the previously created Service Tests to specify expected service levels for Application availability, performance, and usage. SLAs can be created for different time periods and for different Service Tests. This last screenwatch demonstrates the process of creating an SLA, and also highlights the Dashboards and Reports that administrators can use to monitor Service Test results.

Hopefully, this article provides you with a good starting point for creating Service Level Agreements for your E-Business Suite, Siebel, PeopleSoft, JD Edwards E1, or Fusion Applications. Enterprise Manager Cloud Control 12c, with the Application Management Suites, represents a quick and easy way to implement Service Level Management capabilities at customer sites.

Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Google+ | Newsletter
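Returning to the synthetic transaction idea described above: the sketch below is a conceptual illustration only, not the EM 12c/OpenScript tooling itself, of what a scripted business flow does - replay a fixed sequence of steps on a schedule and time it end to end. The host name and paths are hypothetical placeholders.

    using System;
    using System.Diagnostics;
    using System.Net.Http;
    using System.Threading.Tasks;

    // Conceptual sketch only: EM 12c records and replays synthetic transactions with
    // OpenScript and Generic Service Tests; this just illustrates timing a fixed flow.
    class SyntheticSiebelFlow
    {
        static async Task Main()
        {
            using var client = new HttpClient { BaseAddress = new Uri("https://siebel.example.com/") };
            var timer = Stopwatch.StartNew();

            await client.GetAsync("login");            // log into the application
            await client.GetAsync("contacts");         // navigate to the Contacts view
            await client.GetAsync("contacts/42/edit"); // update a contact phone record
            await client.GetAsync("logout");           // log out

            timer.Stop();
            Console.WriteLine($"End-to-end transaction time: {timer.ElapsedMilliseconds} ms");
            // A Service Level Agreement would alert if this time (or availability) breaches the agreed threshold.
        }
    }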

    Read the article

  • How do you formulate the Domain Model in Domain Driven Design properly (Bounded Contexts, Domains)?

    - by lko
    Say you have a few applications which deal with a few different Core Domains. The examples are made up, and it's hard to put a real example with meaningful data together concisely. In Domain Driven Design (DDD), when you start looking at Bounded Contexts and Domains/Sub Domains, a Bounded Context is often described as a "phase" in a lifecycle. An example of a Context here would be within an ecommerce system: although you could model this as a single system, it would also warrant splitting into separate Contexts. Each of these areas within the application has its own Ubiquitous Language, its own Model, and a way to talk to other Bounded Contexts to obtain the information it needs. The Core, Sub, and Generic Domains are the areas of expertise and can be numerous in complex applications.

Say there is a long process dealing with an Entity, for example a Book, in a core domain. Looking at the Bounded Contexts, there can be a number of phases in the book's life-cycle: say outline, creation, correction, publish, and sale phases. Now imagine a second core domain, perhaps a store domain. The publisher has its own branch of stores to sell books, and the store can have a number of Bounded Contexts (life-cycle phases), for example a "Stock" or "Inventory" context. In the first domain there is probably a Book database table with basically just an ID to track the different book Entities in the different life-cycles. Now suppose you have 10+ supporting domains, e.g. Users, Catalogs, Inventory, .. (hard to think of relevant examples) - for example a DomainModel for the Book Outline phase, the Creation phase, Correction phase, Publish phase, and Sale phase. The Store core domain likewise has a number of life-cycle phases.

    public class BookId : Entity { public long Id { get; set; } }

In the creation phase (Bounded Context) the book could be a simple class:

    public class Book : BookId { public string Title { get; set; } public List<string> Chapters { get; set; } //... }

Whereas in the publish phase (Bounded Context) it would have all the text, release date, etc.:

    public class Book : BookId { public DateTime ReleaseDate { get; set; } //... }

The immediate benefit I can see in separating by "life-cycle phase" is that it's a great way to separate business logic, so there are neither mammoth all-encompassing Entities nor mammoth Domain Services. A problem I have is figuring out how to concretely define the rules for the physical layout of the Domain Model.

A. Does the Domain Model get "modeled" so there are as many bounded contexts (separate projects, etc.) as there are life-cycle phases across the core domains in a complex application? Edit: Answer to A. Yes, according to the answer by Alexey Zimarev there should be an entire "Domain" for each bounded context.

B. Is the Domain Model typically arranged by Bounded Contexts (or Domains, or both)? Edit: Answer to B. Each Bounded Context should have its own complete "Domain" (Services/Entities/VOs/Repositories).

C. Does it mean there can easily be tens of "segregated" Domain Models, and multiple projects can use them (the Entities/Value Objects)? Edit: Answer to C. There is a complete "Domain" for each Bounded Context, and the Domain Model (Entity/VO layer/project) isn't "used" by the other Bounded Contexts directly, only via chosen paths (i.e. via Domain Events).

The part that I am trying to figure out is how the Domain Model is actually implemented once you have figured out your Bounded Contexts and Core/Sub Domains, particularly in complex applications. The goal is to establish definitions which can help to separate Entities between the Bounded Contexts and Domains.
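Following the answers above, a minimal sketch of what that separation might look like in code is shown below. All of the names (Publishing, Store.Inventory, BookPublished) are hypothetical illustrations rather than prescribed DDD terms: each Bounded Context owns its own model of the book, and the contexts share only the identity and a Domain Event.

    using System;

    // Hypothetical sketch: one book identity, two Bounded Contexts, no shared entity classes.
    namespace Publishing
    {
        // The Publishing context's model of a book in its publish phase.
        public class Book
        {
            public long Id { get; set; }
            public string Title { get; set; }
            public DateTime ReleaseDate { get; set; }
        }

        // Domain Event raised by the Publishing context when a book goes on sale.
        public class BookPublished
        {
            public long BookId { get; set; }
            public DateTime ReleaseDate { get; set; }
        }
    }

    namespace Store.Inventory
    {
        // The Inventory context keeps only what it needs and never references Publishing.Book.
        public class StockItem
        {
            public long BookId { get; set; }
            public int QuantityOnHand { get; set; }
        }

        public class InventoryService
        {
            // Subscribes to the other context's event (via a bus, queue, etc.) rather than sharing entities.
            public StockItem OnBookPublished(Publishing.BookPublished evt)
            {
                return new StockItem { BookId = evt.BookId, QuantityOnHand = 0 };
            }
        }
    }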

    Read the article

  • How do you report out user research results?

    - by user12277104
    A couple of weeks ago, one of my mentees asked to meet because she wanted my advice on how to report out user research results. She had just conducted her first usability test for her new employer, and was getting to the point where she wanted to put together some slides, but she didn't want them to be boring. She wanted to talk with me about what to present and how best to present results to stakeholders. While I couldn't meet for another week, thanks to SlideShare, I could quickly point her in the direction that my in-person advice would have led her.

First, I'd put together a panel for the February 2012 New Hampshire UPA monthly meeting that we then repeated for the 2012 Boston UPA annual conference. In this panel, I described my reporting techniques, as did six of my colleagues -- two of whom work for companies smaller than mine, and four of whom are independent consultants. Before taking questions, we each presented for 3 to 5 minutes on how we present research results. The differences were really interesting. For example, when do you really NEED a long, written report (as opposed to an email, spreadsheet, or slide deck with callouts)? When you are reporting your test results to the FDA -- that makes sense. In this presentation, I describe two modes of reporting results that I use.

Second, I'd been a participant in the CUE-9 study. CUE stands for Comparative Usability Evaluation, and this was the 9th of these studies that Rolf Molich had designed. Originally, the studies were designed to show the variability in the evaluation methods practitioners use to evaluate websites and applications. Of course, using methods and tasks of their own choosing, the results were wildly different. However, in this 9th study, the tasks were the same, the participants were the same, and the problem severity scale was the same, so how would the results of the 19 practitioners compare? Still wildly variable. But for the purposes of this discussion, it gave me a work product that was not proprietary to the company I work for -- a usability test report that I could share publicly. This was the way I'd been reporting results since 2005, and pretty much what I still do, when time allows.

That said, I have continued to evolve my methods and reporting techniques, and sometimes there is no time to create that kind of report -- the team can't wait the days it takes to take screenshots, go through my notes, refer back to recordings, and write it all up. So in those cases, I use bullet points in email, talk through the findings with stakeholders in a 1-hour meeting, and then post the take-aways on a wiki page. There are other requirements for that kind of reporting to work -- for example, the stakeholders need to attend each of the sessions, and the sessions can't take more than a day to complete -- but you get the idea: there is no one "right" way to report out results. If the method of reporting you are using gives your stakeholders the information they need, in a time frame in which it is useful, and in a format that meets their needs (FDA report or bullet points on a wiki), then that's the "right" way to report your results.

    Read the article

  • Friday Tips #6, Part 1

    - by Chris Kawalek
    We have a two-parter this week, with this post focusing on desktop virtualization and the next one on server virtualization.

Question: Why would I use the Oracle Secure Global Desktop Secure Gateway?

Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization: Well, for the benefit of those who might not be familiar with client connections in Oracle Secure Global Desktop (SGD), let me back up and briefly explain. An SGD client connects to an SGD server using two distinct protocols, which, by default, require two distinct TCP ports. The first is the HTTP protocol, used by the web browser to connect to the SGD web server on TCP port 80, or, if secure connections are enabled (SSL/TLS), then TCP port 443, commonly identified as the "HTTPS" port, that is, "SSL-encrypted HTTP." The second protocol from the client to the server is the Adaptive Internet Protocol, or AIP, which is used for displaying applications, transferring drive mapping data, print jobs, and so on. By default, AIP uses TCP port 3144, or port 5307 when SSL is enabled. When SGD clients need to access SGD over a firewall, the ports that AIP requires are typically "closed", and most administrators are reluctant, to put it mildly, to change their firewall configurations to allow AIP traffic on 3144/5307.

To avoid this problem, SGD introduced "Firewall Forwarding", a technique where, in effect, both HTTP and AIP traffic are "multiplexed" onto a single "well-known" TCP port, that is, port 443, the HTTPS port. This is also known as single-port firewall traversal. The technique takes advantage of the fact that, as a "well-known service", port 443 is usually "open", allowing (encrypted) traffic to pass. At the target SGD server, the two protocols are de-multiplexed and routed appropriately.

The Secure Gateway was developed in response to requirements from customers for SGD to support multi-stage DMZs, and to avoid exposing SGD servers and the information they contain directly to connections from the Internet. The Secure Gateway acts as a reverse proxy in the first tier of the DMZ, accepting, authenticating, and terminating incoming client connections, and then re-encrypting the connections and proxying them, routing them on to SGD servers deeper in the network. The client no longer needs to know the name/IP address of the SGD servers in the network; it connects only to the gateway, which takes care of those internal network details.

The Secure Gateway supports the same single-port firewall capability as "Firewall Forwarding", but offers the additional advantage of load-balancing incoming client connections amongst SGD array members, which could be cumbersome without a forward-deployed secure gateway. Load-balancing weights and policies can be monitored and tuned using the "Balancer Manager" application and Apache mod_proxy_balancer directives.

Going forward, our architects recommend the use of the Secure Gateway over "Firewall Forwarding" for single-port firewall traversal, due to its architectural advantages, greater flexibility, and enhanced features. Finally, it should be noted that the Secure Gateway is not separately priced; any licensed SGD customer may use the Secure Gateway component at no additional cost. For more information, see the "Secure Gateway Administrator's Guide".
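As a rough illustration of the single-port traversal idea described above - and emphatically not of the Secure Gateway's actual implementation - the sketch below accepts every connection on one well-known port and routes it to a different internal service depending on how the stream begins. The host names, ports, and routing rule are hypothetical, and TLS termination is omitted for brevity.

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading.Tasks;

    // Conceptual sketch: demultiplex two protocols arriving on a single listening port.
    class SinglePortDemux
    {
        static async Task Main()
        {
            var listener = new TcpListener(IPAddress.Any, 8443); // stand-in for 443
            listener.Start();
            while (true)
            {
                var client = await listener.AcceptTcpClientAsync();
                _ = Task.Run(() => RouteAsync(client));
            }
        }

        static async Task RouteAsync(TcpClient client)
        {
            var stream = client.GetStream();
            var peek = new byte[4];
            int n = await stream.ReadAsync(peek, 0, peek.Length);
            string prefix = Encoding.ASCII.GetString(peek, 0, n);

            // HTTP requests start with a method name; anything else goes to the "AIP" backend.
            bool isHttp = prefix.StartsWith("GET") || prefix.StartsWith("POST");
            string host = isHttp ? "web-backend.example.com" : "aip-backend.example.com";
            int port = isHttp ? 80 : 3144;

            using var backend = new TcpClient();
            await backend.ConnectAsync(host, port);
            var backendStream = backend.GetStream();
            await backendStream.WriteAsync(peek, 0, n); // forward the bytes already read

            // Pump bytes in both directions until either side closes.
            var up = stream.CopyToAsync(backendStream);
            var down = backendStream.CopyToAsync(stream);
            await Task.WhenAny(up, down);
            client.Close();
        }
    }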

    Read the article

  • Webcast Q&A: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter

    - by Kellsey Ruppel
    Last week we had the fourth webcast in our WebCenter in Action webcast series, "ResCare Solves Content Lifecycle Challenges with Oracle WebCenter", where customer Joe Lichtefeld from ResCare and Wayne Boerger & Doug Thompson from Oracle Partner TEAM Informatics shared how Oracle WebCenter is allowing ResCare to solve content lifecycle challenges, reduce compliance and business risks, and increase adoption of their intranet as a primary business communication tool. In case you missed it, here's a recap of the Q&A.

Joe Lichtefeld, ResCare

Q: Did you run into any issues in the deployment of the platform?
A: We experienced very few issues when implementing the content management and search functionalities. There were some challenges in determining the metadata structure. We tried to strike a fine balance between having enough fields to provide the needed functionality and limiting the impact on the contributing members.

Q: What has been the biggest benefit your end users have seen?
A: The biggest benefit to date is two-fold. Content on the intranet can be maintained by the individual contributors more quickly than in our old process, where all requests were updated by IT. The other big benefit is the ability to find the most current version of a document instead of relying on emails and phone calls to track down the "current" version.

Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
A: We experienced very little resistance. Most of our community groups were eager to be able to contribute and maintain their information. We had the normal hurdles of training and follow-up training that come with implementing a new system and process. As our second phase rolled out access to all employees, we have received more positive feedback on the accessibility of information.

Wayne Boerger & Doug Thompson, TEAM Informatics

Q: Can you integrate multiple repositories with the Google Search Appliance?
A: Yes, the Google Search Appliance is designed to index lots of different repositories, from both public and internal sources. There are included connectors to many repositories, such as SharePoint, databases, file systems, LDAP, and, with the TEAM GSA Connector, the Oracle Content Server. And the index for these repositories can be configured into different collections depending on the use cases that each customer has - and really, for each need within a customer environment.

Q: How many different filters can you add when the search results are returned?
A: Presuming this question is about filtering the search results: you can add as many filters as you like, and it can be done by collection or any number of other criteria. Most importantly, customers now have the ability to limit the returned content by a set metadata value.

Q: With the TEAM Sites Connector, what types of content can you sync?
A: There's really no limit; if it can be checked into the content server, then it is eligible for sync into Sites. So basically, any digital file that has relevance to a Sites implementation can be checked into the WC Content central repository, and then the connector can/will manage it.

Q: Using the Connector, are there any limitations around where in Sites the synced content can be used?
A: There are no limitations about where it can be used. When setting up your environment to use it, you just need to think through the different destinations on the Sites side that might use the content; that way you've got the right information to create the rules needed for the connector.

If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!

ResCare Solves Content Lifecycle Challenges with Oracle WebCenter from Oracle WebCenter

    Read the article

  • X server with nvidia driver crashing: 12.04

    - by Raster
    My X server consistently crashes. It seems like this is happening when the X server is idle. This behaviour is new with 12.04. This is only happening on the second display of a multiseat system. Is there a configuration change I can make to stop this?

X.Org X Server 1.11.3
Release Date: 2011-12-16
X Protocol Version 11, Revision 0
Build Operating System: Linux 2.6.42-26-generic x86_64 Ubuntu
Current Operating System: Linux Desktop 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64
Kernel command line: BOOT_IMAGE=/vmlinuz-3.2.0-29-generic root=/dev/mapper/Group1-Root ro ramdisk_size=512000 quiet splash vt.handoff=7
Build Date: 04 August 2012 01:51:23AM
xorg-server 2:1.11.4-0ubuntu10.7 (For technical support please see http://www.ubuntu.com/support)
Current version of pixman: 0.24.4
Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.1.log", Time: Sun Sep 2 22:37:06 2012
(==) Using config file: "/etc/X11/xorg.conf"
(==) Using system config directory "/usr/share/X11/xorg.conf.d"

Backtrace:
0: /usr/bin/X (xorg_backtrace+0x26) [0x7fcee86be846]
1: /usr/bin/X (0x7fcee8536000+0x18c6ea) [0x7fcee86c26ea]
2: /lib/x86_64-linux-gnu/libpthread.so.0 (0x7fcee785c000+0xfcb0) [0x7fcee786bcb0]
3: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0x902a9) [0x7fcee154a2a9]
4: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0xfd5e7) [0x7fcee15b75e7]
5: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0x4d6b92) [0x7fcee1990b92]
6: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0x4d74d5) [0x7fcee19914d5]
7: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0x4d767d) [0x7fcee199167d]
8: /usr/bin/X (0x7fcee8536000+0x1196ec) [0x7fcee864f6ec]
9: /usr/bin/X (0x7fcee8536000+0xe8ad5) [0x7fcee861ead5]
10: /usr/bin/X (0x7fcee8536000+0xe9d45) [0x7fcee861fd45]

/etc/X11/xorg.conf:
Section "ServerLayout" Identifier "Desktop" Screen 0 "DesktopScreen" 0 0 InputDevice "DesktopMouse" "CorePointer" InputDevice "DesktopKeyboard" "CoreKeyboard" Option "AutoAddDevices" "false" Option "AllowEmptyInput" "true" Option "AutoEnableDevices" "false" EndSection
Section "ServerLayout" Identifier "Desktop2" Screen 1 "Desktop2Screen" 0 0 InputDevice "Desktop2Mouse" "CorePointer" InputDevice "Desktop2Keyboard" "CoreKeyboard" Option "AutoAddDevices" "false" Option "AllowEmptyInput" "true" Option "AutoEnableDevices" "false" EndSection
Section "Module" Load "dbe" Load "extmod" Load "type1" Load "freetype" Load "glx" EndSection
Section "Files" EndSection
Section "ServerFlags" Option "AutoAddDevices" "false" Option "AutoEnableDevices" "false" Option "AllowMouseOpenFail" "on" Option "AllowEmptyInput" "on" Option "ZapWarning" "on" Option "HandleSepcialKeys" "off" # Zapping on
Option "DRI2" "on" Option "Xinerama" "0" EndSection
# Desktop Mouse
Section "InputDevice" Identifier "DesktopMouse" Driver "evdev" Option "Device" "/dev/input/event3" Option "Protocol" "auto" Option "GrabDevice" "on" Option "Emulate3Buttons" "no" Option "Buttons" "5" Option "ZAxisMapping" "4 5" Option "SendCoreEvents" "true" EndSection
# Desktop2 Mouse
Section "InputDevice" Identifier "Desktop2Mouse" Driver "evdev" Option "Device" "/dev/input/event5" Option "Protocol" "auto" Option "GrabDevice" "on" Option "Emulate3Buttons" "no" Option "Buttons" "5" Option "ZAxisMapping" "4 5" Option "SendCoreEvents" "true" EndSection
Section "InputDevice" Identifier "DesktopKeyboard" Driver "evdev" Option "Device" "/dev/input/event4" Option "XkbRules" "xorg" Option "XkbModel" "105" Option "XkbLayout" "us" Option "Protocol" "Standard" Option "GrabDevice" "on" EndSection
Section "InputDevice" Identifier "Desktop2Keyboard" Driver "evdev" Option "Device" "/dev/input/event6" Option "XkbRules" "xorg" Option "XkbModel" "105" Option "XkbLayout" "us" Option "Protocol" "Standard" Option "GrabDevice" "on" EndSection
Section "Monitor" Identifier "Desktop2Monitor" VendorName "Acer" ModelName "Acer G235H" HorizSync 30.0 - 83.0 VertRefresh 56.0 - 75.0 Option "DPMS" EndSection
Section "Monitor" Identifier "DesktopMonitor" VendorName "Acer" ModelName "Acer H213H" HorizSync 30.0 - 83.0 VertRefresh 56.0 - 75.0 Option "DPMS" EndSection
Section "Device" Identifier "EVGACard" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 560 Ti" Option "Coolbits" "1" BusID "PCI:2:0:0" Screen 0 EndSection
Section "Device" Identifier "XFXCard" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 9800" Option "Coolbits" "1" BusID "PCI:5:0:0" Screen 0 EndSection
Section "Screen" Identifier "DesktopScreen" Device "EVGACard" Monitor "DesktopMonitor" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection
Section "Screen" Identifier "Desktop2Screen" Device "XFXCard" Monitor "Desktop2Monitor" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection

    Read the article

  • Numerically stable(ish) method of getting Y-intercept of mouse position?

    - by Fraser
    I'm trying to unproject the mouse position to get the position on the X-Z plane of a ray cast from the mouse. The camera is fully controllable by the user. Right now, the algorithm I'm using is...

Unproject the mouse into the camera to get the ray:

    Vector3 p1 = Vector3.Unproject(new Vector3(x, y, 0), 0, 0, width, height, nearPlane, farPlane, viewProj);
    Vector3 p2 = Vector3.Unproject(new Vector3(x, y, 1), 0, 0, width, height, nearPlane, farPlane, viewProj);
    Vector3 dir = p2 - p1;
    dir.Normalize();
    Ray ray = new Ray(p1, dir);

Then get the Y-intercept by using algebra:

    float t = -ray.Position.Y / ray.Direction.Y;
    Vector3 p = ray.Position + t * ray.Direction;

The problem is that the projected position is "jumpy". As I make small adjustments to the mouse position, the projected point moves in strange ways. For example, if I move the mouse one pixel up, it will sometimes move the projected position down, but when I move it a second pixel, the projected position will jump back to the mouse's location. The projected location is always close to where it should be, but it does not smoothly follow a moving mouse. The problem intensifies as I zoom the camera out.

I believe the problem is caused by numeric instability. I can make minor improvements to this by doing some computations at double precision, and possibly by abusing the fact that floating point calculations are done at 80-bit precision on x86; however, before I start micro-optimizing this and getting deep into how the CLR handles floating point, I was wondering if there's an algorithmic change I can make to improve this?

EDIT: A little snooping around in .NET Reflector on SlimDX.dll:

    public static Vector3 Unproject(Vector3 vector, float x, float y, float width, float height, float minZ, float maxZ, Matrix worldViewProjection)
    {
        Vector3 coordinate = new Vector3();
        Matrix result = new Matrix();
        Matrix.Invert(ref worldViewProjection, out result);
        coordinate.X = (float) ((((vector.X - x) / ((double) width)) * 2.0) - 1.0);
        coordinate.Y = (float) -((((vector.Y - y) / ((double) height)) * 2.0) - 1.0);
        coordinate.Z = (vector.Z - minZ) / (maxZ - minZ);
        TransformCoordinate(ref coordinate, ref result, out coordinate);
        return coordinate;
    }

    // ...

    public static void TransformCoordinate(ref Vector3 coordinate, ref Matrix transformation, out Vector3 result)
    {
        Vector3 vector;
        Vector4 vector2 = new Vector4 {
            X = (((coordinate.Y * transformation.M21) + (coordinate.X * transformation.M11)) + (coordinate.Z * transformation.M31)) + transformation.M41,
            Y = (((coordinate.Y * transformation.M22) + (coordinate.X * transformation.M12)) + (coordinate.Z * transformation.M32)) + transformation.M42,
            Z = (((coordinate.Y * transformation.M23) + (coordinate.X * transformation.M13)) + (coordinate.Z * transformation.M33)) + transformation.M43
        };
        float num = (float) (1.0 / ((((transformation.M24 * coordinate.Y) + (transformation.M14 * coordinate.X)) + (coordinate.Z * transformation.M34)) + transformation.M44));
        vector2.W = num;
        vector.X = vector2.X * num;
        vector.Y = vector2.Y * num;
        vector.Z = vector2.Z * num;
        result = vector;
    }

...which seems to be a pretty standard method of unprojecting a point from a projection matrix; however, this serves to introduce another point of possible instability. Still, I'd like to stick with the SlimDX Unproject routine rather than writing my own unless it's really necessary.
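As a small illustration of the double-precision mitigation already mentioned above (a sketch under that assumption, not a definitive fix), the hypothetical helper below keeps the two Vector3.Unproject calls and only moves the Y = 0 plane intersection into doubles; it also skips the normalization, which cancels out of the intersection anyway.

    // Sketch: intersect the near->far segment from Vector3.Unproject with the Y = 0 plane,
    // doing the arithmetic in double precision. Assumes the SlimDX assembly is referenced.
    static class GroundPicker
    {
        public static SlimDX.Vector3 IntersectGroundPlane(SlimDX.Vector3 p1, SlimDX.Vector3 p2)
        {
            double dx = (double)p2.X - p1.X;
            double dy = (double)p2.Y - p1.Y;
            double dz = (double)p2.Z - p1.Z;

            // Solve p1.Y + t * dy = 0 for t; normalizing the direction is unnecessary because t rescales with it.
            double t = -(double)p1.Y / dy;

            return new SlimDX.Vector3(
                (float)(p1.X + t * dx),
                0.0f,
                (float)(p1.Z + t * dz));
        }
    }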

    Read the article

  • Mastering snow and Java development at jDays in Gothenburg

    - by JavaCecilia
    Last weekend, I took the train from Stockholm to Gothenburg to attend and present at the new Java developer conference jDays. It was professionally arranged in the Swedish exhibition hall close to the amusement park Liseberg, and we got a great deal out of the top-level presenters and hallway discussions.

Understanding and Improving Your Java Process

Our main purpose was to spread information on the JVM and our monitoring tools for Java processes, so I held a crash course in the most important terms and concepts you need if you want to affect the performance of your Java process - from the basics of the JVM specification to the interpretation of heap usage graphs. For correct analysis, you also need to understand something about process memory: you need space for the Java heap (-Xms for initial size and -Xmx for max heap size), but the process memory also contains the thread stacks (each of size -Xss), JVM-internal data structures used for keeping track of Java objects on the heap, method compilation/optimization, native libraries, etc. If you get long pause times, make sure to monitor your application and watch the allocation rate and the frequency of pause times.

My colleague Klara Ward then held a presentation on the Java Mission Control product, the profiling and diagnostics tools suite for HotSpot, coming soon. The room was packed and the talk much appreciated; Klara demonstrated four different scenarios, e.g. how to diagnose and fix latencies due to lock contention for logging.

My German colleague, OpenJDK ambassador Dalibor Topic, travelled to Sweden to give the second keynote, "Make the Future Java". He let us in on the coming features and roadmaps of Java, which now delivers major versions on a two-year schedule (Java 7 in 2011, Java 8 in 2013, etc.), and also let us know where to download early versions of Java 8 so we can report problems early on.

Software Development in Teams

Being a scout leader, I'm drilled in different team-building and workshop techniques for creating strong groups - of course, I had to attend Henrik Berglund's session on building successful teams. He spoke about the importance of clear goals, autonomy, and agreed processes. Thomas Sundberg ended the conference by doing live remote pair programming with Alex in Romania and gave concrete tips for people wanting to try it out (for local collaboration, remember to wash and change clothes).

Memory Master Keynote

The conference keynote was delivered by the Swedish memory master Mattias Ribbing, who showed off by remembering the order of a deck of cards he'd seen once. He made it interactive by having the audience learn a memory-mastering technique for remembering ten ordered things by heart, asking us to shout out the order backwards - and we made it! I desperately need this; I bought the book and will get back on the subject.

Continuous Delivery

The most impressive presenter was Axel Fontaine on Continuous Delivery. He had very well-prepared slides with key images of his message and moved about the stage like a rock star. The topic is of course highly interesting: how to create an infrastructure that enables immediate feedback to developers and the ability to release your product several times per day. Tomek Kaczanowski delivered a funny and useful presentation on good and bad tests, providing comic relief with poorly written tests and useful rules of thumb for rewriting them.

To conclude, we had a great time and hope to see you at jDays next year :)

    Read the article

< Previous Page | 580 581 582 583 584 585 586 587 588 589 590 591  | Next Page >