Search Results

Search found 16208 results on 649 pages for 'pass community'.


  • First Foray–About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site and high time that something was posted. I have a list of topics that I will be working through and posting. Some I am sure will have been posted by others, but I will be sticking to the technical problems and challenges that I’ve recently faced, and the solutions that worked for me. My motto when learning something new has always been “My kingdom for an example!”, and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.

    A bit of background about me: My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions. I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges.

    Enough about me. On to a real problem: SSIS connection timeouts versus command timeouts.

    Last week, I was working on automating the processing for a large Analysis Services cube. I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new partitions to, and dropping old partitions from, an Analysis Services cube. I had the package working great, tested, and ready for deployment. It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if there are any partitions present that are no longer needed in a rolling 60-month window. My client uses Tivoli for running all their production jobs, not SQL Agent, so I had to build a command line file for Tivoli to use to run the package.

    Everything was going great. I had tested the command file from my development workstation, using an XML configuration file to pass server-specific parameters into the package when executed using the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job. On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again failure.

    After several more failed attempts, and getting the Teradata DBA involved to monitor and see if we were connecting and failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds. This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then. At this point one of the DBAs found a post on the Teradata forum that had the clues to the puzzle: there is a separate “CommandTimeout” custom property on the data source object that may need to be adjusted for longer-running queries. I opened up the SSIS package, opened the data flow task that generated the partition list table and right-clicked on the data source. From the context menu, I selected “Show Advanced Editor” and found the property. Sure enough, it was set to 30 seconds.
    The CommandTimeout property can also be edited in the SSIS Properties sheet. In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds. I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn’t. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute. Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully.

    The lesson learned for me was two-fold:

    Always compare query execution times between development and production environments, and don’t assume that production will always be faster. With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly.

    SSIS connection timeout settings do not affect command timeouts. Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding.

    Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was. To be fair though, in the 5+ years that I have been working with SSIS, I could only recall one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.
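
    The same two-timeout split shows up in any ADO.NET-style provider, which makes it easy to see why raising one did nothing for the other. Here is a minimal sketch using plain SqlClient (the server, database, and table names are made up, standing in for the Teradata source in the story); the point is that the two timeouts live on two different objects:

        using System.Data.SqlClient;

        class TimeoutDemo
        {
            static void Main()
            {
                // "Connect Timeout" only governs how long we wait to establish the session.
                var connectionString =
                    "Data Source=myServer;Initial Catalog=myWarehouse;" +
                    "Integrated Security=SSPI;Connect Timeout=120";

                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();

                    using (var command = new SqlCommand(
                        "SELECT COUNT(*) FROM dbo.FactSales", connection))
                    {
                        // CommandTimeout governs how long we wait for the query to start
                        // returning results; its default of 30 seconds matches the
                        // mysterious 30-second disconnect in the story. Raising one
                        // timeout does nothing for the other.
                        command.CommandTimeout = 300; // seconds; 0 = wait indefinitely
                        var rows = command.ExecuteScalar();
                    }
                }
            }
        }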


  • A deadlock was detected while trying to lock variables in SSIS

    Error: 0xC001405C at SQL Log Status: A deadlock was detected while trying to lock variables "User::RowCount" for read/write access. A lock cannot be acquired after 16 attempts. The locks timed out.

    Have you ever considered variable locking when building your SSIS packages? I expect many people haven’t, just because most of the time you never see an error like the one above. I’ll try and explain a few key concepts about variable locking, and hopefully you never will see that error.

    First of all, what is all this variable locking about? Put simply, SSIS variables have to be locked before they can be accessed, and then of course unlocked once you have finished with them. This is baked into SSIS, presumably to reduce the risk of race conditions, but with that comes some additional overhead in that you need to be careful to avoid lock conflicts in some scenarios.

    The most obvious place you will come across any hint of locking (no pun intended) is the Script Task or Script Component with their ReadOnlyVariables and ReadWriteVariables properties. These two properties allow you to enter lists of variables to be used within the task, or to put it another way, lists of variables to be locked so that they are available within the task. During the task pre-execute phase the variables are locked, you then use them during the execute phase when your code is run, and they are unlocked for you during the post-execute phase. So by entering the variable names in one of the two lists, the locking is taken care of for you, and you just read and write to the Dts.Variables collection that is exposed in the task for the purpose.

    With the variable PackageInt specified in the ReadWriteVariables property, when I write the code inside that task I don’t have to worry about locking at all, as shown below.

        public void Main()
        {
            // Set the variable value to something new
            Dts.Variables["PackageInt"].Value = 199;

            // Raise an event so we can play in the event handler
            bool fireAgain = true;
            Dts.Events.FireInformation(0, "Script Task Code", "This is the script task raising an event.", null, 0, ref fireAgain);

            Dts.TaskResult = (int)ScriptResults.Success;
        }

    As you can see, as well as accessing the variable, hassle free, I also raise an event. Now consider a scenario where I also have an event handler. What if my event handler tries to use the same variable as well? Well, obviously for the point of this post, it fails with the error quoted previously. The reason why is clearly illustrated if you consider the following sequence of events:

    1. Package execution starts.
    2. Script Task in Control Flow starts.
    3. Script Task in Control Flow locks the PackageInt variable as specified in the ReadWriteVariables property.
    4. Script Task in Control Flow executes its script, and the OnInformation event is raised.
    5. The OnInformation event handler starts.
    6. Script Task in the OnInformation event handler starts.
    7. Script Task in the OnInformation event handler attempts to lock the PackageInt variable (for either read or write, it doesn’t matter), but fails because the variable is already locked.

    The problem is caused by the event handler task trying to use a variable that is already locked by the task in Control Flow. Events are always raised synchronously, therefore the task in Control Flow that is raising the event will not regain control until the event handler has completed, so we really do have an un-resolvable locking conflict, better known as a deadlock.
    In this scenario we can easily resolve the problem by managing the variable locking explicitly in code, so there is no need to specify anything for the ReadOnlyVariables and ReadWriteVariables properties.

        public void Main()
        {
            // Set the variable value to something new, with explicit lock control
            Variables lockedVariables = null;
            Dts.VariableDispenser.LockOneForWrite("PackageInt", ref lockedVariables);
            lockedVariables["PackageInt"].Value = 199;
            lockedVariables.Unlock();

            // Raise an event so we can play in the event handler
            bool fireAgain = true;
            Dts.Events.FireInformation(0, "Script Task Code", "This is the script task raising an event.", null, 0, ref fireAgain);

            Dts.TaskResult = (int)ScriptResults.Success;
        }

    Now the package will execute successfully, because the variable lock has already been released by the time the event is raised, so no conflict occurs. For those of you with a SQL engine background this should all sound strangely familiar: it boils down to getting in and out as fast as you can to reduce the risk of lock contention, be that SQL pages or SSIS variables.

    Unfortunately we cannot always manage the locking ourselves. The Execute SQL Task is very often used in conjunction with variables, either to pass in parameter values or to get results out. Either way, the task will manage the locking for you, and will fail when it cannot lock the variables it requires. The scenario outlined above is a clear-cut deadlock scenario: both parties are waiting on each other, so it is un-resolvable. The mechanism used within SSIS isn’t actually that clever, and whilst the message says it is a deadlock, it really just means it tried a few times and then gave up. The last part of the error message is actually the most accurate in terms of the failure: A lock cannot be acquired after 16 attempts. The locks timed out.

    Now this may come across as a recommendation to always manage locking manually in the Script Task or Script Component yourself, but I think that would be an overreaction. It is more of a reminder to be aware that in high-concurrency scenarios, especially when sharing variables across multiple objects, locking is an important design consideration.

    Update: Make sure you don’t use explicit locking as well as leaving the variable names in the ReadOnlyVariables and ReadWriteVariables lock lists, otherwise you’ll get the deadlock error; you cannot lock a variable twice!
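
    One defensive refinement to the explicit pattern above, as a sketch (LockOneForWrite and Unlock are the same calls used in the post; wrapping them in try/finally is my own addition): if the code between the lock and the unlock throws, the variable stays locked, so it is worth guaranteeing the release.

        public void Main()
        {
            Variables lockedVariables = null;
            try
            {
                // Lock, use, and release the variable as quickly as possible
                Dts.VariableDispenser.LockOneForWrite("PackageInt", ref lockedVariables);
                lockedVariables["PackageInt"].Value = 199;
            }
            finally
            {
                // Guarantee the unlock even if the assignment throws; otherwise any
                // later reader, such as an event handler, hits the same "deadlock"
                // error described above.
                if (lockedVariables != null)
                {
                    lockedVariables.Unlock();
                }
            }

            Dts.TaskResult = (int)ScriptResults.Success;
        }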


  • Many Different Things Rolled into a Ball

    - by MOSSLover
    Yeah, I know I don’t blog much anymore, because life has taken me places that don’t involve the interwebs, unfortunately. I am in the midst of planning two events, starting a not-for-profit, creating more sessions for various conferences, submitting to various conferences, working a 40-hour-a-week job, and attempting to hang out with boyfriend/friends/family. So you can see that list does not include this blog; sadly, that’s how it goes sometimes. The bottom pieces are more important than any of the top pieces. I haven’t seen St. Louis in a while and I get to go back. I was gone from home for MVP Summit and Best Practices Conference, so the boyfriend and cat didn’t get to see me either for a bit. Then you have to add in the whole broken-toilet fiasco this week. Maintenance really thought it would be cool to turn off the ability to flush. I mean, who does that? Then when we called the owner, he came by and turned it on, and we figured it was an accident, because, well, the next day no one came by to tell us there was a leak. It was all kinds of strangeness and involved me running to other people’s toilets. As Dan Usher would say, I was a sad panda for a few days.

    So I guess I wanted to post a few thoughts here just because I can. I do not like multiple content editor webparts embedded with html files in numerous pages doing the same thing. I will tell you why I don’t like these particular webparts and the way they are being used. First off, if you have a bunch of pages with script includes, it’s about time you just dumped them into the masterpage. Why bother finding all 20 pages and changing those pages when you can just use a single masterpage that already exists?

    The other thing that is bothering me these days is screen scraping. Just don’t do it, because in 2010 you will find the UI is substantially slower. I understand you are new and you have no idea what to do. You are also using 2007, am I right? So then you need to go to codeplex.com and type in a search for SPServices. Download it, use it, love it, and then have its babies (well, maybe don’t go that far; this is not the Grid in Tron).

    If you have a ton of constants in your code, why did you not go in and create a webpart with a bunch of properties and/or link to a configuration list hidden in the browser? This type of property and list could help you out in the long run. The power users and administrators can now change the control without you having to compile it over and over again. It’s good stuff. Being able to change the control without compiling it matters especially in 2007, where you have to do a farm solution. In 2010 you can do a sandbox solution, I guess, but shouldn’t you make it as easy and supportable as possible for other users?

    In conclusion, I’m an angry person when it comes to viewing something repeatedly and analyzing it in a system. Now we will move on to the next topic: MVP Summit. So yeah, I can’t really talk about particulars, but I can talk about my experience as a person. Don’t build something up to be cooler than it is only to be dropped from your 10,000-foot perch. My experience was great, but the content overall left something to be desired. It’s ok; I got to meet a lot of people I would not have met if I had not gone. Some of it was surreal, such as product group members showing up and talking to us. It was pretty neat. Plus I never had the chance to get to that mythical MS office in Redmond. Prior to Summit it was like Rainbow Brite’s unicorn taunting me on television when I was a kid.
    So I guess, with all that said, I give it a B. It was awesome in some ways, but lacking in others. The cool part is that I got to go. Would I have lived without going? Yes, but it was still cool. I could prattle on about other things and make this post massive, but I’m going to pass and give myself a piece of Sunday to play Rockband and do 800 other things. I hope the two of you who read this blog are well. I’ll catch you all at another juncture. Have a good weekend and varying holidays in between.

    Technorati Tags: SharePoint,MVP Summit,JQuery,Javascript


  • Why Is Vertical Monitor Resolution so Often a Multiple of 360?

    - by Jason Fitzpatrick
    Stare at a list of monitor resolutions long enough and you might notice a pattern: many of the vertical resolutions, especially those of gaming or multimedia displays, are multiples of 360 (720, 1080, 1440, etc.). But why exactly is this the case? Is it arbitrary, or is there something more at work?

    Today’s Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Trojandestroy recently noticed something about his display interface and needs answers:

    YouTube recently added 1440p functionality, and for the first time I realized that all (most?) vertical resolutions are multiples of 360. Is this just because the smallest common resolution is 480×360, and it’s convenient to use multiples? (Not doubting that multiples are convenient.) And/or was that the first viewable/conveniently sized resolution, so hardware (TVs, monitors, etc.) grew with 360 in mind? Taking it further, why not have a square resolution? Or something else unusual? (Assuming it’s usual enough that it’s viewable.) Is it merely a pleasing-the-eye situation?

    So why have the display be a multiple of 360?

    The Answer

    SuperUser contributor User26129 offers us not just an answer as to why the numerical pattern exists, but a history of screen design in the process:

    Alright, there are a couple of questions and a lot of factors here. Resolutions are a really interesting field of psychooptics meeting marketing.

    First of all, why are the vertical resolutions on YouTube multiples of 360? This is of course just arbitrary; there is no real reason this is the case. The reason is that resolution here is not the limiting factor for YouTube videos – bandwidth is. YouTube has to re-encode every video that is uploaded a couple of times, and tries to use as few re-encoding formats/bitrates/resolutions as possible to cover all the different use cases. For low-res mobile devices they have 360×240, for higher-res mobile there’s 480p, and for the computer crowd there is 360p for 2xISDN/multiuser landlines, 720p for DSL and 1080p for higher-speed internet. For a while there were some codecs other than h.264, but these are slowly being phased out, with h.264 having essentially ‘won’ the format war and all computers being outfitted with hardware codecs for this.

    Now, there is some interesting psychooptics going on as well. As I said: resolution isn’t everything. 720p with really strong compression can and will look worse than 240p at a very high bitrate. But on the other side of the spectrum: throwing more bits at a certain resolution doesn’t magically make it better beyond some point. There is an optimum here, which of course depends on both resolution and codec. In general: the optimal bitrate is actually proportional to the resolution.

    So the next question is: what kind of resolution steps make sense? Apparently, people need about a 2x increase in resolution to really see (and prefer) a marked difference. Anything less than that and many people will simply not bother with the higher bitrates; they’d rather use their bandwidth for other stuff. This was researched quite a long time ago and is the big reason why we went from 720×576 (415 kpix) to 1280×720 (922 kpix), and then again from 1280×720 to 1920×1080 (2 Mpix). Stuff in between is not a viable optimization target. And again, 1440p is about 3.7 Mpix, another ~2x increase over HD. You will see a difference there. 4K is the next step after that.
    Next up is that magical number of 360 vertical pixels. Actually, the magic number is 120 or 128. All resolutions are some kind of multiple of 120 pixels nowadays; back in the day they used to be multiples of 128. This is something that just grew out of the LCD panel industry.

    LCD panels use what are called line drivers, little chips that sit on the sides of your LCD screen that control how bright each subpixel is. Because historically, for reasons I don’t really know for sure, probably memory constraints, these multiple-of-128 or multiple-of-120 resolutions already existed, the industry standard line drivers became drivers with 360 line outputs (1 per subpixel). If you were to tear down your 1920×1080 screen, I would put money on there being 16 line drivers on the top/bottom and 9 on one of the sides. Oh hey, that’s 16:9. Guess how obvious that resolution choice was back when 16:9 was ‘invented’.

    Then there’s the issue of aspect ratio. This is really a completely different field of psychology, but it boils down to: historically, people have believed and measured that we have a sort of wide-screen view of the world. Naturally, people believed that the most natural representation of data on a screen would be in a wide-screen view, and this is where the great anamorphic revolution of the ’60s came from, when films were shot in ever wider aspect ratios. Since then, this kind of knowledge has been refined and mostly debunked. Yes, we do have a wide-angle view, but the area where we can actually see sharply – the center of our vision – is fairly round. Slightly elliptical and squashed, but not really more than about 4:3 or 3:2. So for detailed viewing, for instance reading text on a screen, you can utilize most of your detail vision by employing an almost-square screen, a bit like the screens up to the mid-2000s.

    However, again, this is not how marketing took it. Computers in ye olden days were used mostly for productivity and detailed work, but as they commoditized, and as the computer as a media consumption device evolved, people didn’t necessarily use their computer for work most of the time. They used it to watch media content: movies, television series and photos. And for that kind of viewing, you get the most ‘immersion factor’ if the screen fills as much of your vision (including your peripheral vision) as possible. Which means widescreen.

    But there’s more marketing still. When detail work was still an important factor, people cared about resolution. As many pixels as possible on the screen. SGI was selling almost-4K CRTs! The most optimal way to get the maximum amount of pixels out of a glass substrate is to cut it as square as possible. 1:1 or 4:3 screens have the most pixels per diagonal inch. But with displays becoming more consumery, inch-size became more important, not amount of pixels. And this is a completely different optimization target. To get the most diagonal inches out of a substrate, you want to make the screen as wide as possible. First we got 16:10, then 16:9, and there have been moderately successful panel manufacturers making 22:9 and 2:1 screens (like Philips). Even though pixel density and absolute resolution went down for a couple of years, inch-sizes went up and that’s what sold. Why buy a 19″ 1280×1024 when you can buy a 21″ 1366×768? Eh… I think that about covers all the major aspects here.
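
    As a quick sanity check of the line-driver arithmetic above (an illustrative sketch, not part of the original answer), dividing panel dimensions by the driver chip's line count reproduces the familiar ratios:

        using System;

        class LineDriverMath
        {
            static void Main()
            {
                const int driverLines = 120; // lines handled per driver chip, per the answer

                // 1920 / 120 = 16 drivers along the top/bottom, 1080 / 120 = 9 along
                // the side: the familiar 16:9 ratio falls straight out.
                Console.WriteLine("1920x1080 -> {0} x {1} drivers",
                    1920 / driverLines, 1080 / driverLines);

                // Older panels lined up with 128 instead: 1280 = 10 x 128, 1024 = 8 x 128.
                Console.WriteLine("1280x1024 -> {0} x {1} drivers", 1280 / 128, 1024 / 128);
            }
        }
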
    There’s more, of course; bandwidth limits of HDMI, DVI, DP and of course VGA played a role, and if you go back to the pre-2000s, graphics memory, in-computer bandwidth and simply the limits of commercially available RAMDACs played an important role. But for today’s considerations, this is about all you need to know.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.


  • Mocking property sets

    - by mehfuzh
    In this post, I will show how you can mock property sets with your expected values, or even actions, using JustMock. To begin, we have a sample interface:

        public interface IFoo
        {
            int Value { get; set; }
        }

    Now, we can create a mock that will throw on any call other than the one expected; generally it's a strict mock, and we can do it like:

        bool expected = false;
        var foo = Mock.Create<IFoo>(BehaviorMode.Strict);
        Mock.ArrangeSet(() => { foo.Value = 1; }).DoInstead(() => expected = true);

        foo.Value = 1;

        Assert.True(expected);

    Here, the method for running through our expectations for a set is Mock.ArrangeSet, where we can directly set our expectations or can even use matchers in it, like:

        var foo = Mock.Create<IFoo>(BehaviorMode.Strict);

        Mock.ArrangeSet(() => foo.Value = Arg.Matches<int>(x => x > 3));

        foo.Value = 4;
        foo.Value = 5;

        Assert.Throws<MockException>(() => foo.Value = 3);

    In the example, any set of Value not satisfying the matcher expression will throw a MockException, as this is a strict mock. But what will be the case for loose mocks, where we also have to assert it? Here, let's take an interface with an indexed property. Indexers are treated in the same way as properties, as basic indexers let you access your class as if it were an array.

        public interface IFooIndexed
        {
            string this[int key] { get; set; }
        }

    We want to set up a value for a particular index; we then pass that mock to some implementer, where it will actually be called. Once done, we want to assert that it has been invoked properly.

        var foo = Mock.Create<IFooIndexed>();

        Mock.ArrangeSet(() => foo[0] = "ping");

        foo[0] = "ping";

        Mock.AssertSet(() => foo[0] = "ping");

    In the above example, both the values are user-defined. It might happen that we want to make it more dynamic. In this example, I set it up for a set with any value, and finally checked if it was set with the one I am looking for:

        var foo = Mock.Create<IFooIndexed>();

        Mock.ArrangeSet(() => foo[0] = Arg.Any<string>());

        foo[0] = "ping";

        Mock.AssertSet(() => foo[0] = Arg.Matches<string>(x => string.Compare("ping", x) == 0));

    This is more or less it for mocking property sets, but we can further have a set throw an exception, or even run our own action for a particular set, like:

        Mock.ArrangeSet(() => foo.MyProperty = 10).Throws(new ArgumentException());

    Or:

        bool expected = false;
        var foo = Mock.Create<IFoo>(BehaviorMode.Strict);
        Mock.ArrangeSet(() => { foo.Value = 1; }).DoInstead(() => expected = true);

        foo.Value = 1;

        Assert.True(expected);

    Or call the original setter; in this example it will throw a NotImplementedException:

        var foo = Mock.Create<FooAbstract>(BehaviorMode.Strict);
        Mock.ArrangeSet(() => { foo.Value = 1; }).CallOriginal();
        Assert.Throws<NotImplementedException>(() => { foo.Value = 1; });

    Finally, try all these, find issues, post them to the forum and make it work for you :-). Hope that helps,
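
    One more property-set pattern that may be handy, sketched under the assumption that the MustBeCalled/Mock.Assert pair composes with ArrangeSet the same way it does with other JustMock arrangements (it is not shown in the post itself): marking a set as required and verifying it afterwards.

        var foo = Mock.Create<IFoo>();

        // Mark the set as a required expectation (assumed to chain off
        // ArrangeSet just as it does off Mock.Arrange).
        Mock.ArrangeSet(() => foo.Value = 5).MustBeCalled();

        foo.Value = 5;

        // Mock.Assert(foo) should fail if any required arrangement was never hit.
        Mock.Assert(foo);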


  • Updating a database connection password using a script

    - by Tim Dexter
    An interesting customer requirement that I thought was worthy of sharing today. Thanks to James for the requirement, Bryan for the proposed solution, and me for testing the solution and proving it works :0)

    A customer's implementation of Sarbanes-Oxley requires them to change all database account passwords every 90 days. This is scripted today, leveraging shell scripts, for most of their environments. But how can they manage the BI Publisher connections? Now, the customer is running 11g and therefore using WebLogic on the middle tier, which is the first clue to Bryan's proposed solution. To paraphrase and embellish Bryan's solution a little: why not use a JNDI connection from BIP to the database, then employ the WebLogic scripting engine to make updates to the JNDI as needed? BIP is completely uninvolved, and with a little 'timing' users will be completely unaware of the password updates, i.e. change the password when reports are not being executed. Perfect!

    James immediately tracked down the WLST script that could be used here: http://middlewaremagic.com/weblogic/?p=4261 (thanks Ravish). Now it was just a case of testing the theory. Some steps:

    1. Create the JNDI connection in WLS.
    2. Create the JNDI connection in BI Publisher pointing to the WLS connection.
    3. Build new data models using, or re-point data sources to use, the JNDI connection.
    4. Create the WLST script to update the WLS JNDI password as needed.
    5. Test!

    Some details. Creating the JNDI connection in WebLogic is pretty straightforward:

    1. Log into the console, look for Data Sources under the Services section of the home page and click it.
    2. Click New >> Generic Datasource.
    3. Give the connection a name. For the JNDI name, prefix it with 'jdbc/', so I have 'jdbc/localdb' - this name is important, you'll need it on the BIP side.
    4. Select your db type - this will influence the drivers and information needed on the next page. Being a company man, I'm using an Oracle db. Click Next.
    5. Select the driver of choice. There are lots, I know; you can read about them. I just chose 'Oracle's Driver (Thin) for Instance connections; Versions 9.0.1 and later'. Click Next >> Next.
    6. Fill out the db name (SID), server, port, username to connect and password >> Next.
    7. Test the config to ensure you can connect >> Next.
    8. Now you need to deploy the connection to your BI server: select it and click Next. You're done with the JNDI config.

    Creating the JNDI connection on the Publisher side is covered here. Just remember to use the connection name you created in WLS, e.g. 'jdbc/localdb'. Not gonna tell you how to do this; go read the user guide :0) Suffice to say, it works.

    The WLST script requires a little reading around the subject to understand the scripting engine and how to execute scripts. Nicely covered here. However, a bit of googlin' and I found an even easier way of running the script:

        ${ServerHome}/common/bin/wlst.sh updatepwd.py

    where updatepwd.py is my script file; it can be in another directory. As part of the wlst.sh script your environment is set up for you, so it's very simple to execute.

    The nitty gritty: you need to take Ravish's script above and create a file with a .py extension. It's going to need some modification, as he explains on the web page, to make it work in your environment. I played around with it for a while but kept running into errors. The script as is tries to loop through all of your connections and modify the user and passwords for each. Not quite what we are looking for. Remember, our requirement is to just update the password for a given connection.
    I also found another issue with the script. WLS 10.x does not allow updates to passwords using clear text (i.e. un-encrypted text) while the server is in production mode. It's a bit much to set it back to development mode, bounce it, change the passwords, bounce again, then change back to production mode and bounce once more. After lots of messing about, I finally came up with the following:

        #############################################################################
        #
        # Update password for JNDI connections
        #
        #############################################################################

        print("*** Trying to Connect.... *****")
        connect('weblogic','welcome1','t3://localhost:7001')
        print("*** Connected *****")

        edit()
        startEdit()

        print ("*** Encrypt the password ***")
        en = encrypt('hr')
        print "Encrypted pwd: ", en

        print ("*** Changing pwd for LocalDB ***")
        dsName = 'LocalDB'
        print 'Changing Password for DataSource ', dsName
        cd('/JDBCSystemResources/'+dsName+'/JDBCResource/'+dsName+'/JDBCDriverParams/'+dsName)
        set('PasswordEncrypted',en)

        save()
        activate()

    It's pretty simple, and you can expand on it to loop through the data sources and change each as needed. I have hardcoded the password into the file, but you can pass it as a parameter as needed using the properties file method. I'm not going to get into the detail of that here, but it's covered with an example here. Couple of points to note:

    1. The change to the password requires a server bounce to get the changes picked up. You can add that to the shell script you will use to call the script above.
    2. The script above needs to be run from the MW_HOME\user_projects\domains\bifoundation_domain directory to get the encryption libraries set correctly.

    My command to run the whole script was:

        d:\oracle\bi_mw\wlserver_10.3\common\bin\wlst.cmd updatepwd.py

    where wlst.cmd is the scripting command line and updatepwd.py was my update password script above. I have not quite spoon-fed everything you need to make it a robust script, but at least you know you can do it, and you can work out the rest, I think :0)


  • 3 Trends for SMBs around Social, Mobile, and Sensor

    - by Socially_Aware_Enterprise
    While I am often talking to big companies or discussing enterprise solutions, there are times when individuals ask me about small or medium sized business trends. Interestingly, the enterprise Social, Mobile, and Sensor initiatives I regularly discuss are in fact related to even the mom-and-pop storefront. The eco-system of new service players in the Social-Mobile-Sensor space generally emerges by developing partnerships with enterprises as they develop and bring economy of scale to their services for the larger market. And of course Oracle has an entire division dedicated to delivering products and support to help emerging companies compete without the need to open an industrial-strength credit line. So here are some trends that we are helping large enterprises to deploy today, but that small and medium businesses should be able to take advantage of by the end of this year and into 2015.

    1) The typical small business is generally "Localized", but the ability to be "Hyper-Localized" will come as location-based services become ubiquitous. Many small businesses have one or several storefronts, and theirs are typically within a single regional economic footprint. While the internet provides global reach, it will be the businesses that invest in social, mobile and local that will win in the end. Of course I am a huge SoMoLo evangelist. SMBs' content and targeting with platforms for Geo-Fencing, Geo-Conquesting and Path-Matching to HHI are all going to be accessible to them, if not via mobile apps, then via mobile messaging in the social networks that offer it. Expect to be able to target Facebook messaging not by city, but by store or mall. This makes being able to be "Hyper-Local" even more important. And with new proximity services coming online more than ever before, SMBs will operate and service customers with pinpoint accuracy, right down to where they stand in an aisle. Geo-Conquesting, placing ads when customers pass through competitors' regions, will be huge for small players; car dealers are doing this now. Also, iBeacons are now very cheap and getting easier to put in retail stores. The ability for sales to happen anywhere in the store via a mobile phone or tablet is huge, as it will give the small shop the flexibility to not have to "guard the register", as more or most transactions will be digital. Thus, M-Commerce and T-Commerce will change the job of cashier dramatically.

    2) Intra-Brand Advocacy. The idea now is that rather than just depending on your trusty social media manager and his team, you are going to push more and more individuals with expertise inside the organization to help manage, reach out, and utilize social channels to handle the incoming questions and answers customers need. While for years CRM was the tool of the enterprise, today CRMs enable this "Salesforce et al" capability to trickle throughout the company. This puts greater pressure on organizing roles, but also flattens out the organization. Internal collaboration around topics and customer needs is going to be the key for SMBs to finally get serious about customer experiences. Their customers are online and in social networks. This includes not just B2C SMBs but B2B companies as well. Don't believe me?
    To find the players, just use the hashtag #SocialSelling and you will see.

    3) The visual networks will begin to move from content aggregators to content collaboration platforms, which means Pinterest, Instagram, Vine, and others will begin to add more of the features brands want: marketing platforms first, rather than the unique brand partnerships they offer today. This will open ways for SMBs to engage with clear brand messaging and metrics, eventually providing more "collaboration" between brand and consumer. Don't think for a minute Facebook bought Oculus Rift so you could see your timeline in 3-D. The social networks I advise customers to invest in are ones that are intrinsically audio and visual. Players from SoundCloud to Pinterest are deploying ways for brands to harness their interactive visual or audio based social networks to sell ad units, aka brand messaging. While the social media revolution was going on, the emphasis was on the social; today it is more and more about the media in social, which enterprises, and soon small and medium businesses, will be connected to.


  • FAQ–Highlight GridView Row on Click and Retain Selected Row on Postback

    - by Vincent Maverick Durano
    A couple of months ago I wrote a simple demo about "Highlighting GridView Row on MouseOver". I've noticed many members in the forums (http://forums.asp.net) are asking how to highlight a row in a GridView and retain the selected row across postbacks, so I've decided to write this post to demonstrate how to implement it, as a reference for others who might need it. In this demo I am going to use a combination of plain JavaScript and jQuery to do the client-side manipulation. I presume that you already know how to bind the grid with data, because I will not include the code for populating the GridView here. For binding the GridView you can refer to this post: Binding GridView with Data the ADO.Net way, or this one: GridView Custom Paging with LINQ.

    To get started, let's implement the highlighting of a GridView row on row click and retain the selected row on postback. For simplicity I set up the page like this:

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h2>You have selected Row: (<asp:Label ID="Label1" runat="server" />)</h2>
            <asp:HiddenField ID="hfCurrentRowIndex" runat="server"></asp:HiddenField>
            <asp:HiddenField ID="hfParentContainer" runat="server"></asp:HiddenField>
            <asp:Button ID="Button1" runat="server" onclick="Button1_Click" Text="Trigger Postback" />
            <asp:GridView ID="grdCustomer" runat="server" AutoGenerateColumns="false" onrowdatabound="grdCustomer_RowDataBound">
                <Columns>
                    <asp:BoundField DataField="Company" HeaderText="Company" />
                    <asp:BoundField DataField="Name" HeaderText="Name" />
                    <asp:BoundField DataField="Title" HeaderText="Title" />
                    <asp:BoundField DataField="Address" HeaderText="Address" />
                </Columns>
            </asp:GridView>
        </asp:Content>

    Note: Since the action is done at the client side, when we do a postback (like clicking on a button) the page will be re-created and you will lose the highlighted row. This is normal, because the server doesn't know anything about the client/browser unless you do something to notify it that something has changed. To persist the settings we will use some HiddenField controls to store the data, so that when the page posts back we can reference the values from there.
    Now here are the JavaScript functions below:

        <asp:Content ID="Content1" runat="server" ContentPlaceHolderID="HeadContent">
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" type="text/javascript"></script>
        <script type="text/javascript">
            var prevRowIndex;

            function ChangeRowColor(row, rowIndex) {
                var parent = document.getElementById(row);
                var currentRowIndex = parseInt(rowIndex) + 1;

                if (prevRowIndex == currentRowIndex) {
                    return;
                }
                else if (prevRowIndex != null) {
                    parent.rows[prevRowIndex].style.backgroundColor = "#FFFFFF";
                }

                parent.rows[currentRowIndex].style.backgroundColor = "#FFFFD6";
                prevRowIndex = currentRowIndex;

                $('#<%= Label1.ClientID %>').text(currentRowIndex);
                $('#<%= hfParentContainer.ClientID %>').val(row);
                $('#<%= hfCurrentRowIndex.ClientID %>').val(rowIndex);
            }

            $(function () {
                RetainSelectedRow();
            });

            function RetainSelectedRow() {
                var parent = $('#<%= hfParentContainer.ClientID %>').val();
                var currentIndex = $('#<%= hfCurrentRowIndex.ClientID %>').val();
                if (parent != null) {
                    ChangeRowColor(parent, currentIndex);
                }
            }
        </script>
        </asp:Content>

    ChangeRowColor() is the function that sets the background color of the selected row. It is also where we store the previous row and rowIndex values in the HiddenFields. The $(function(){}); is a short-hand for the jQuery document.ready function. It fires once the page is posted back to the server, which is why we call the RetainSelectedRow() function there. The RetainSelectedRow() function is where we reference the currently selected values stored in the HiddenFields and pass them to the ChangeRowColor() function to retain the highlighted row.

    Finally, here's the code-behind part:

        protected void grdCustomer_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                e.Row.Attributes.Add("onclick", string.Format("ChangeRowColor('{0}','{1}');", e.Row.ClientID, e.Row.RowIndex));
            }
        }

    The code above is responsible for attaching the JavaScript onclick event for each row, calling the ChangeRowColor() function and passing the e.Row.ClientID and e.Row.RowIndex to it. (A screenshot of the sample output accompanied the original post.)

    That's it! I hope someone finds this post useful!

    Technorati Tags: jQuery,GridView,JavaScript,TipTricks
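
    The markup wires up Button1_Click, but the post never shows its body. As a minimal sketch (the handler body below is my assumption; the control IDs come from the markup above), the code-behind can read the round-tripped index straight from the HiddenField after the postback:

        protected void Button1_Click(object sender, EventArgs e)
        {
            // Hypothetical body: the client-side script stored the selected row
            // index in hfCurrentRowIndex, so it survives the postback and can be
            // used server-side as well.
            int selectedIndex;
            if (int.TryParse(hfCurrentRowIndex.Value, out selectedIndex))
            {
                Label1.Text = (selectedIndex + 1).ToString();
            }
        }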


  • CodePlex Daily Summary for Thursday, September 06, 2012

    CodePlex Daily Summary for Thursday, September 06, 2012

    Popular Releases

    menu4web: menu4web 0.4.1 - javascript menu for web sites: This release is for those who believe that global variables are evil. menu4web has been wrapped into the m4w singleton object. Added "Vertical Tabs" example which illustrates object notation.

    WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.1: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For the compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit. Features: AsyncUI extensions; controls and control extensions; converters; debugging helpers; imaging; IO helpers; VisualTree helpers; samples. Recent changes NOTE: Namespace changes. DebugConsol...

    iPDC - Free Phasor Data Concentrator: iPDC-v1.3.1: iPDC suite version 1.3.1, modifications and bugs fixed (from v1.3.0). New User Manual for iPDC-v1.3.1 available on websites. Bug resolved: PMU Simulator TCP connection error and hung connection for client (PDC). Now PMU Simulator (server) can communicate with more than one PDC (clients) over TCP and UDP in parallel. PMU Simulator is now sending the exact data frames as mentioned in the data rate by the user. PMU Simulator data rate has been verified by iPDC database entries and PMU Connection Tes...

    Microsoft SQL Server Product Samples: Database: AdventureWorks OData Feed: The AdventureWorks OData service exposes resources based on specific SQL views. The SQL views are a limited subset of the AdventureWorks database that results in several consuming scenarios: CompanySales, Documents, ManufacturingInstructions, ProductCatalog, TerritorySalesDrilldown, WorkOrderRouting. How to install the sample: You can consume the AdventureWorks OData feed from http://services.odata.org/AdventureWorksV3/AdventureWorks.svc. You can also consume the AdventureWorks OData fe...

    Desktop Google Reader: 1.4.6: Sorting feeds alphabetically is now optional (see preferences window)

    DotNetNuke® Community Edition CMS: 06.02.03: Major highlights: Fixed issue where mailto: links were not working when sending bulk email. Fixed issue where users did not see friendship relationships. (Problem is in 6.2, which does not show in the Versions Affected list above.) Fixed the issue with cascade deletes in comments in CoreMessaging_Notification. Fixed UI issue when using a date field as a required profile property during user registration. Fixed error when running the product in debug mode. Fixed visibility issue when...

    Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.

    Active Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.

    Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release. Added fallback icon if unable to get the image/icon from the Cloud Service. Removed some stale plugins that were either outdated or incomplete. Added handler for *.ab files for restoring backups. Added plugin to create device backups. Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\. Added custom folder icon for the android backups directory. Better error handling for installing an apk. Bug fixes for the Runn...

    BI System Monitor: v2.1: Data Audits report and supporting SQL, and SSIS package. Environment Overview report enhancements, improving the appearance, addition of data audit finding indicators. Note: SQL 2012 version coming soon.

    Hidden Capture (HC): Hidden Capture 1.1: Hidden Capture 1.1 by Mohsen E.Dawatgar http://Hidden-Capture.blogfa.com

    Ext Spec: Ext Spec 0.2.1: Refined examples and improved distribution options.

    The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --

    Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. New features include: built-in search engine using Lucene.NET; flood control improvements; notifications improvements: sync option and mail body. View Roadmap for more details. webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83

    WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...

    Iveely Search Engine: Iveely Search Engine (0.2.0)

    GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail. Bugfix.

    Smart Data Access layer: Smart Data Access Layer Ver 3: In this version, support for executing inline queries is added. Check the Documentation section for details.

    DotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04. Don't forget to backup your installation before upgrade. Changes in 06.00.04 - Fix: SQL scripts for 6.003 missed object qualifiers within stored procedures. Fix: added missing resource "cmdCancel.Text" in form.ascx.resx. Changes in 06.00.03 - Fix: MakeThumbnail was broken if the application pool was configured to .Net 4. Change: Data is now stored in nvarchar(max) instead of ntext. Changes in 06.00.02 - The scripts are now compatible with SQL Azure, tested in a ne...

    Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.

    New Projects

    Any-Service: AnyService is a .net 4.0 Windows service shell. It hosts any windows application in non-gui mode to run as a service.

    BabyCloudDrives - the multi cloud drive desktop's application: wpf

    BLACK ORANGE: Download The HPAD TEXT EDITOR and use it Wisely..

    CodePlex New Release Checker: CodePlex New Release Checker is a small library that makes it easy to add "New Version Available!" functionality to your CodePlex project.

    Collect

    CSVManager

    Exam Project: My Exam Project. Computer Vision, C and OpenCV

    FTP: Hey guys thanks for checking out my ftp!

    Haushaltsbuch: 1

    ModMaker.Lua: ModMaker.Lua is an open source .NET library that parses and executes Lua code.

    MyJabbr: MyJabbr

    netduinoscope: Design shield and software to use netduino as oscilloscope

    NetSurveillance Web Application: Net Surveillance Web Application

    NiconicoApiHelper

    OStega: A simple library for encrypting text into a bmp or png image.

    OURORM: orm

    TFS Cloud Deployment Toolkit: The TFS Cloud Deployment Toolkit is a set of tools that integrate with TFS 2010 to help manage configuration and deployment to various remote environments.

    The Visual Guide for Building Team Foundation Server 2012 Environments: A step-by-step guide for building Team Foundation Server 2012 environments that include SharePoint Server 2010, SQL Server 2012, Windows Server 2012 and more!

    WinRT LineChart: An attempt at creating a usable LineChart for everyone to use in his/her own Windows 8 Apps


  • Augmenting your Social Efforts via Data as a Service (DaaS)

    - by Mike Stiles
    The following is the 3rd in a series of posts on the value of leveraging social data across your enterprise by Oracle VP Product Development Don Springer and Oracle Cloud Data and Insight Service Sr. Director Product Management Niraj Deo. In this post, we will discuss the approach and value of integrating additional “public” data via a cloud-based Data-as-a-Service platform (or DaaS) to augment your Socially Enabled Big Data Analytics and CX Management.

    Let’s assume you have a functional Social-CRM platform in place. You are now successfully and continuously listening and learning from your customers and key constituents in social media, you are identifying relevant posts and following up with direct engagement where warranted (1:1, 1:community, and 1:all), and you are starting to integrate signals for communication into your appropriate Customer Experience (CX) Management systems, as well as insights for analysis in your business intelligence application. What is the next step?

    Augmenting Social Data with Other Public Data for More Advanced Analytics

    When we say advanced analytics, we are talking about understanding causality and correlation, from a wide variety, volume and velocity of data, against Key Performance Indicators (KPIs) to achieve and optimize business value, and in some cases to predict future performance so you can make appropriate course corrections and change the outcome to your advantage while you still can. The data needed to acquire, process and analyze this is very nuanced:

    It can vary across structured, semi-structured, and unstructured data.
    It can span across content, profile, and communities-of-profiles data.
    It is increasingly public, curated and user generated.

    The key is not just getting the data, but making it value-added data and using it to help discover the insights to connect to and improve your KPIs. As we spend time working with our larger customers on advanced analytics, we have seen a need arise for more business applications to have the ability to ingest and use “quality” curated, social, transactional reference data and corresponding insights. The challenge for the enterprise has been getting this data inline into an easily accessible system, and providing the contextual integration of the underlying data, enriched with insights, to be exported into the enterprise’s business applications. The following diagram shows the requirements for this next-generation data and insights service, or DaaS.

    Some quick points on these requirements. Public data, in this context, is about common business entities, such as:

    Customers, Suppliers, Partners, Competitors (all organizations)
    Contacts, Consumers, Employees (all people)
    Products, Brands

    This data can be broadly categorized incrementally as:

    Base Utility data (address, industry classification)
    Public Master Reference data (trade style, hierarchy)
    Social/Web data (News, Feeds, Graph)
    Transactional Data generated by enterprise processes, workflows, etc.

    This data has traits of high volume, variety, velocity, etc., and the technology needed to efficiently integrate it for your needs includes:

    Change management of Public Reference Data across all categories
    Applied Big Data to extract statistics as well as real-time insights
    Knowledge Diagnostics and Data Mining

    As you consider how to deploy this solution, many of our customers will be using an online “cloud” service that provides quality data and insights uniformly to all their necessary applications.
    In addition, they are requesting a service that:

    Is agile and easy to use: applications integrated with the service can obtain data on demand, quickly and simply.
    Is cost-effective: pre-integrated into applications, so customers don’t have to do the integration themselves.
    Has high data quality: a single point of access to reference data for data quality and linkages to transactional, curated and social data.
    Supports data governance: becomes more manageable and cost-effective, since control of data privacy and compliance can be enforced in a centralized place.

    Data-as-a-Service (DaaS)

    Just as the cloud has transformed and now offers a better path for how an enterprise manages its IT across infrastructure, platform, and software (IaaS, PaaS, and SaaS), the next step is data (DaaS). Over the last 3 years, we have seen the market begin to offer cloud-based data services and gain initial traction. On one side of the DaaS continuum, we see an “appliance” type of service that provides a single, reliable source of accurate business data plus social information about accounts, leads, contacts, etc. On the other side of the continuum, we see more of an online market “exchange” approach, where ISVs and data publishers can publish and sell premium datasets within the exchange, with the exchange providing a rich set of web interfaces to improve the ease of data integration. Why the difference? It depends on the provider’s philosophy on how fast the rate of commoditization of certain data types will occur.

    How do you decide the best approach? Our perspective, as shown in the diagram below, is that the enterprise should develop an elastic schema to support multi-domain applicability. This allows the enterprise to take the most flexible approach to harness the speed and breadth of public data to achieve value. The key tenet of the proposed approach is that an enterprise carefully federates common utility and master reference data end points, mobility considerations and content processing, so that they are pervasively available. One way you may already be familiar with this approach is in how you do address verification treatments for accounts, contacts, etc. If you design and revise this service in such a way that it is also easily available to social analytic needs, you could extend it to launch geo-location-based social use cases (marketing, sales, etc.).

    Our fundamental belief is that value-added data, achieved through enrichment with specialized algorithms as well as applying business “know-how” to weight-factor KPIs based on innovative combinations across an ever-increasing variety, volume and velocity of data, will be where real value is achieved. Essentially, Data-as-a-Service becomes a single entry point for the ever-increasing richness and volume of public data, with enrichment and combined capabilities to extract and integrate the right data from the right sources with the right factoring at the right time, for faster decision-making and action within your core business applications. As more data becomes available (and in many cases commoditized), this value-added data processing approach will provide you with ongoing competitive advantage.

    Let’s look at a quick example of creating a master reference relationship that could be used as an input for a variety of your already existing business applications. In phase 1, a simple master relationship is achieved between a company (e.g. General Motors) and a variety of car brands’ social insights.
    The reference data allows for easy sort, export and integration into a set of CRM use cases for analytics, sales and marketing. In phase 2, you create more data relationships (e.g. competitors, contacts, other brands) to have broader and deeper references (social profiles, social meta-data) for more use cases across CRM, HCM, SRM, etc. This is just the tip of the iceberg, as the number of master reference relationships is constrained only by your imagination and the availability of quality curated data you have to work with.

    DaaS is just now emerging in the marketplace as the next step in cloud transformation. For some of you, this may be the first you have heard of it. Let us know if you have questions or perspectives. In the meantime, we will continue to share insights as we can.

    Photo: Erik Araujo, stock.xchng

    Read the article

  • Book Review: Brownfield Application Development in .NET

    - by DotNetBlues
    I recently finished reading the book Brownfield Application Development in .NET by Kyle Baley and Donald Belcham.  The book is available from Manning.  First off, let me say that I'm a huge fan of Manning as a publisher.  I've found their books to be top-quality, over all.  As a Kindle owner, I also appreciate getting an ebook copy along with the dead tree copy.  I find ebooks to be much more convenient to read, but hard-copies are easier to reference. The book covers, surprisingly enough, working with brownfield applications.  Which is well and good, if that term has meaning to you.  It didn't for me.  Without retreading a chunk of the first chapter, the authors break code bases into three broad categories: greenfield, brownfield, and legacy.  Greenfield is, essentially, new development that hasn't had time to rust and is (hopefully) being approached with some discipline.  Legacy applications are those that are more or less stable and functional, that do not expect to see a lot of work done to them, and are more likely to be replaced than reworked. Brownfield code is the gray (brown?) area between the two and the authors argue, quite effectively, that it is the most likely state for an application to be in.  Brownfield code has, in some way, been allowed to tarnish around the edges and can be difficult to work with.  Although I hadn't realized it, most of the code I've worked on has been brownfield.  Sometimes, there's talk of scrapping and starting over.  Sometimes, the team dismisses increased discipline as ivory tower nonsense.  And, sometimes, I've been the ignorant culprit vexing my future self. The book is broken into two major sections, plus an introduction chapter and an appendix.  The first section covers what the authors refer to as "The Ecosystem" which consists of version control, build and integration, testing, metrics, and defect management.  The second section is on actually writing code for brownfield applications and discusses object-oriented principles, architecture, external dependencies, and, of course, how to deal with these when coming into an existing code base. The ecosystem section is just shy of 140 pages long and brings some real meat to the matter.  The focus on "pain points" immediately sets the tone as problem-solution, rather than academic.  The authors also approach some of the topics from a different angle than some essays I've read on similar topics.  For example, the chapter on automated testing is on just that -- automated testing.  It's all well and good to criticize a project as conflating integration tests with unit tests, but it really doesn't make anyone's life better.  The discussion on testing is more focused on the "right" level of testing for existing projects.  Sometimes, an integration test is the best you can do without gutting a section of functional code.  Even if you can sell other developers and/or management on doing so, it doesn't actually provide benefit to your customers to rewrite code that works.  This isn't to say the authors encourage sloppy coding.  Far from it.  Just that they point out the wisdom of ignoring the sleeping bear until after you deal with the snarling wolf. The other sections take a similarly real-world, workable approach to the pain points they address.  As the section moves from technical solutions like version control and continuous integration (CI) to the softer, process issues of metrics and defect tracking, the authors begin to gently suggest moving toward a zero defect count.  
While that really sounds like an unreasonable goal for a lot of ongoing projects, it's quite apparent that the authors have first-hand experience with taming some gruesome projects.  The suggestions are grounded and workable, and the difficulty of some situations is explicitly acknowledged. I have to admit that I started getting bored by the end of the ecosystem section.  No matter how valuable I think a good project manager or business analyst is to a successful ALM, at the end of the day, I'm a gear-head.  Also, while I agreed with a lot of the ecosystem ideas, in theory, I didn't necessarily feel that a lot of the single-developer projects that I'm often involved in really needed that level of rigor.  It's only after reading the sidebars and commentary in the coding section that I had the context for the arguments made in favor of a strong ecosystem supporting the development process.  That isn't to say that I didn't support good product management -- indeed, I've probably pushed too hard, on occasion, for a strong ALM outside of just development.  This book gave me deeper insight into why some corners shouldn't be cut and how damaging certain sins of omission can be. The code section, though, kept me engaged for its entirety.  Many technical books can be used as reference material from day one.  The authors were clear, however, that this book is not one of these.  The first chapter of the section (chapter seven, over all) addresses object oriented (OO) practices.  I've read any number of definitions, discussions, and treatises on OO.  None of the chapter was new to me, but it was a good review, and I'm of the opinion that it's good to review the foundations of what you do, from time to time, so I didn't mind. The remainder of the book is really just about how to apply OOP to existing code -- and, just because all your code exists in classes does not mean that it's object oriented.  That topic has the potential to be extremely condescending, but the authors miraculously managed to never once make me feel like a dolt or that they were wagging their finger at me for my prior sins.  Instead, they continue the "pain points" and problem-solution presentation to give concrete examples of how to apply some pretty academic-sounding ideas.  That's a point worth emphasizing, as my experience with most OO discussions is that they stay in the academic realm.  This book gives some very, very good explanations of why things like the Liskov Substitution Principle exist and why a corporate programmer should even care.  Even if you know, with absolute certainty, that you'll never have to work on an existing code-base, I would recommend this book just for the clarity it provides on OOP. This book goes beyond just theory, or even real-world application.  It presents some methods for fixing problems that any developer can, and probably will, encounter in the wild.  First, the authors address refactoring application layers and internal dependencies.  Then, they take you through those layers from the UI to the data access layer and external dependencies.  Finally, they come full circle to tie it all back to the overall process.  By the time the book is done, you're left with a lot of ideas, but also a reasonable plan to begin to improve an existing project structure. Throughout the book, it's apparent that the authors have their own preferred methodology (TDD and domain-driven design), as well as some preferred tools.  The "Our .NET Toolbox" is something of a neon sign pointing to that latter point.  
They do not beat the reader over the head with anything resembling a "One True Way" mentality.  Even for the most emphatic points, the tone is quite congenial and helpful.  With some of the near-theological divides that exist within the tech community, I found this to be one of the more remarkable characteristics of the book.  Although the authors favor tools that might be considered Alt.NET, there is no reason the advice and techniques given couldn't be quite successful in a pure Microsoft shop with Team Foundation Server.  For that matter, even though the book specifically addresses .NET, it could be applied to a Java and Oracle shop, as well.

    Read the article

  • Using Delegates in C# (Part 1)

    - by rajbk
    This post provides a very basic introduction to delegates in C#. Part 2 of this post can be read here. A delegate is a class that is derived from System.Delegate.  It contains a list of one or more methods called an invocation list. When a delegate instance is “invoked” with the arguments defined in the signature of the delegate, each of the methods in the invocation list gets invoked with those arguments. The code below shows examples with static and instance methods respectively:

    Static Methods

         1: using System;
         2: using System.Linq;
         3: using System.Collections.Generic;
         4:
         5: public delegate void SayName(string name);
         6:
         7: public class Program
         8: {
         9:     [STAThread]
        10:     static void Main(string[] args)
        11:     {
        12:         SayName englishDelegate = new SayName(SayNameInEnglish);
        13:         SayName frenchDelegate = new SayName(SayNameInFrench);
        14:         SayName combinedDelegate = (SayName)Delegate.Combine(englishDelegate, frenchDelegate);
        15:
        16:         combinedDelegate.Invoke("Tom");
        17:         Console.ReadLine();
        18:     }
        19:
        20:     static void SayNameInFrench(string name) {
        21:         Console.WriteLine("Je m'appelle " + name);
        22:     }
        23:
        24:     static void SayNameInEnglish(string name) {
        25:         Console.WriteLine("My name is " + name);
        26:     }
        27: }

    We have declared a delegate of type SayName with a return type of void, taking an input parameter name of type string. On line 12, we create a new instance of this delegate which refers to a static method, SayNameInEnglish.  SayNameInEnglish has the same return type and parameter list as the delegate declaration.  Once a delegate is instantiated, the instance will always refer to the same target: delegates are immutable. On line 13, we create a new instance of the delegate but point to a different static method. As you may recall, a delegate instance encapsulates an invocation list. You create an invocation list by combining delegates using the Delegate.Combine method (there is an easier syntax, as you will see later). When two non-null delegate instances are combined, their invocation lists are combined to form a new invocation list. This is done on line 14.  On line 16, we invoke the delegate with the Invoke method and pass in the required string parameter. Since the delegate has an invocation list with two entries, each of the methods in the invocation list is invoked. If an unhandled exception occurs during the invocation of one of these methods, the exception bubbles up to the line where the invocation was made (line 16). If a delegate is null and you try to invoke it, you will get a System.NullReferenceException. We see the following output when the program is run:

        My name is Tom
        Je m'appelle Tom

    Instance Methods

    The code below outputs the same results as before. The only difference here is that we are creating delegates that point to a target object (an instance of Translator) and instance methods which have the same signature as the delegate type. The target object can never be null. We also use the shortcut syntax += to combine the delegates instead of Delegate.Combine.
         1: public delegate void SayName(string name);
         2:
         3: public class Program
         4: {
         5:     [STAThread]
         6:     static void Main(string[] args)
         7:     {
         8:         Translator translator = new Translator();
         9:         SayName combinedDelegate = new SayName(translator.SayNameInEnglish);
        10:         combinedDelegate += new SayName(translator.SayNameInFrench);
        11:
        12:         combinedDelegate.Invoke("Tom");
        13:         Console.ReadLine();
        14:     }
        15: }
        16:
        17: public class Translator {
        18:     public void SayNameInFrench(string name) {
        19:         Console.WriteLine("Je m'appelle " + name);
        20:     }
        21:
        22:     public void SayNameInEnglish(string name) {
        23:         Console.WriteLine("My name is " + name);
        24:     }
        25: }

    A delegate can be removed from a combination of delegates by using the -= operator. Removing a delegate from an empty list, or removing a delegate that does not exist in a non-empty list, will not result in an exception. Delegates are invoked synchronously using the Invoke method. We can also invoke them asynchronously using the compiler-generated BeginInvoke and EndInvoke methods.
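    To make those last two points concrete, here is a minimal sketch reusing the SayName delegate and Translator class from the listing above (the asynchronous calls assume the classic .NET Framework, where the compiler-generated BeginInvoke/EndInvoke pair is available):

        Translator translator = new Translator();
        SayName combined = new SayName(translator.SayNameInEnglish);
        combined += new SayName(translator.SayNameInFrench);

        // Remove the French method; removing it again later would simply be a no-op.
        combined -= new SayName(translator.SayNameInFrench);
        combined.Invoke("Tom"); // prints only "My name is Tom"

        // Asynchronous invocation on a thread-pool thread.
        IAsyncResult result = combined.BeginInvoke("Tom", null, null);
        combined.EndInvoke(result); // blocks until the call completes

    Note that BeginInvoke only works when the invocation list contains a single method; calling it on a multicast delegate throws an ArgumentException, which is one more reason the -= operator is handy here.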

    Read the article

  • Disaster Recovery Discovery

    - by Rodney Landrum
    Last weekend I joined several of my IT staff on a mission to perform a DR test in our remote CoLo center in a large Southeast city of the US. Can I be more obtuse? The goal was simple for me as the sole DBA in a throng of Windows, Storage, Network and SAN admins – restore the databases and make them work. There were 4 applications that back-ended to 7 SQL Server databases on 4 different SQL Server instances. We would maintain the original server names, but beyond that it was fair game. We had time to prepare, so I was able to script out or otherwise automate the recovery process. I used sp_help_revlogin for three of the servers - a bit of a cheat, actually, because restoring the Master database on the target DR servers was the specified course of action according to the DR procedures (the caveat “IF REQUIRED” left it open to interpretation).

    I really wanted to avoid the step of restoring Master for a number of reasons, but mainly because I did not want to deal with issues starting SQL services afterward. Having to account for the location of TempDB and the version conflicts of the resource DBs were just two of the battles I chose not to fight, not to mention other system database location problems that might arise and prevent SQL from starting.  I was going to have to restore all of the user databases anyway, so I would not really gain any benefit, outside of logins, from taking the time to restore the source Master database over the newly installed one on the fresh server.

    What I wanted was the ability to restore the Master database as a user database, call it Master_Mine, from a backup on the source system, and then use that restored database to script the SQL logins and passwords on the DR systems. While I did not attempt this on the trip, the thought stuck in my mind, and this past week I succeeded at scripting user accounts and passwords using only a restored copy of the Master database. Granted, there were several challenges to overcome.  Also, as is usual for any work like this, the usual disclaimers apply: this is not something that I would imagine Microsoft would condone or support, and this was really only an experiment for me to learn if it was even possible. While I have tested the process with success, I do not know that I would use this technique in a documented procedure, because future updates for SQL Server will render it non-functional.

    I thought at first, incorrectly of course, that I could use sp_help_revlogin on a restored copy of the Master database, which I named Master_Mine.   Since sp_help_revlogin uses the system schema objects sys.syslogins and sys.server_principals, this was not going to work, because all results would come from the main Master database. To test this I added a SQL login via SSMS, backed up Master, restored it as Master_Mine, and then deleted the login.  The test account I created should presumably still be in the Master_Mine database, so I should be able to get to it and script out its creation with its password hash. That way I would not need to know the password, and any applications that stored that password would not have to be altered in the DR scenario; they would just work as expected.

    Once I realized that approach would not work, I began looking deeper.  Knowing that sys.syslogins and sys.server_principals are system views, their underlying code should be available with sp_helptext, right? It was. And this led me to discover the two tables sys.sysxlgns and sys.sysprivs, where the data I needed was stored.
    These tables existed in both the real Master and the restored copy, Master_Mine.  I used this information to tweak the sp_help_revlogin stored procedure to build the logins cursor from these tables instead. For the password hash, sp_help_revlogin uses the function LoginProperty(), which takes a user name and the option ‘passwordhash’ to return the hash for the user. Unfortunately, it requires the login to exist in the Master database, so this would not work. Another slight modification I had to make, then, was to pull the password hash itself (pwdhash from sys.sysxlgns) into the logins cursor and comment out the section of sp_help_revlogin that uses LoginProperty. Instead, I pass the pwdhash value as the variable @PWD_varbinary to the sp_hexadecimal stored procedure, which is also created by and used within the code provided by Microsoft in the link above for sp_help_revlogin.

    The final challenge: sys.sysxlgns and sys.sysprivs are visible only within a Dedicated Administrator Connection (DAC) query window in SSMS or within SQLCMD.  To open a DAC connection you have to be logged in on the SQL Server itself, via RDP in my case, and you preface the server name in the query connection with ADMIN:, so that the server connection looks like ADMIN:ServerName. From there you can create the modified stored procedure in the restored copy of a Master database from a source system, under whatever name you like, and then run it. I named my new stored procedure usp_help_revlogin_MyMaster. Upon execution I was happy to see the logins and password hashes that I needed to apply from the source Master database, without having to restore over the new Master system database and without the need to access the original server (assuming it was down due to whatever disaster put it in that state).

    You will note that I am not providing full code samples here of the modifications. I will say that it was a slight bit of work, and anyone who needed to do this, for whatever reason, could fairly easily roll their own solution with the information provided herein.  My goal, as I said, was to prove that this could be done and to provide another option, if required, to ease the burden of getting SQL Servers up and available in an emergency situation where alternatives may be more challenging or otherwise unavailable.

    Read the article

  • TGIF: Engagement Wrap-up

    - by Michael Snow
    We've had a very busy week here at Oracle, and as we build up to Oracle OpenWorld starting in less than 10 days, it doesn't look like things will be slowing down. Engagement is definitely in the air this week. Our friend John Mancini published a great article entitled "The World of Engagement" on his Digital Landfill blog yesterday, and we hosted a great webcast yesterday with R "Ray" Wang from Constellation Research on the "9 C's of Engagement".

    I wanted to wrap up the week with some key takeaways from our webcast yesterday with Ray Wang. If you missed the webcast, fear not - it is now available On-Demand. We'll leave you this week with lots of questions about how to navigate these churning waters of engagement. Stay tuned to the Oracle WebCenter Social Business Thought Leaders Webcast Series as we fuel this dialogue.

    Company Culture
    Does company support a culture of putting customer satisfaction ahead of profits?
    Does culture promote creativity and cross-functional employee collaboration?
    Does culture accept different views of a multi-generational workforce?
    Does culture promote employee training and skills development?
    Does culture support upward mobility and long-term retention?
    Does culture support work-life balance?
    Does the culture provide rewards for employees for outstanding customer support?

    Channels
    What are the current primary channels for customer communications?
    What do you think will be the primary channels in two years?
    Is company developing a support model for emerging channels?
    Do all channels consistently deliver the same level of customer support?
    Do you know the cost per transaction across all channels?
    Do you engage customers proactively across multiple channels?
    Do all channels have access to the same customer information?

    Community
    Does company extend customer support into virtual communities of interest?
    Does company facilitate educating users through its virtual communities?
    Does company mine its customers’ experience into useful data?
    Does company increase the value for customers through using data to deliver new products and services?
    Does company support two-way interactions with its customers through communities of interest?
    Does company actively support social CRM, online communities and social media markets?

    Credibility
    Does company market its trustworthiness through external certificates such as business licenses, BBB certificates or other validations?
    Does company promote trust through customer testimonials and case studies on ethical business practices?
    Does company promote truthful marketing campaigns?
    Does company make it easy for customers to complain?
    Does company build its reputation for standing behind its products with guarantees for satisfaction?
    Does company protect its customer data with high security measures?

    Content
    What sources do you use to create customer content?
    Does company mine social media and blogs for customer content?
    How does your company sort, store and retain its customer content?
    How frequently does content get updated?
    What external sources do you use for customer content?
    How many responses are typically received from a knowledge management system inquiry?
    Does your company use customer content to design and develop new products and services?

    Context
    Does your company market to customers in clusters or individually?
    Does your company customize its messages and personalize them to the specific needs of each individual customer?
    Does your company store customer data based on their past behaviors, purchases, sentiment analysis and current activities?
    Does your company manage customer context according to the channels used? For example, does it identify personal-use channels versus business channels?
    What is your frequency of collecting customer activities across various touch points?
    How is your customer data stored and analyzed?
    Is contextual data used for future customer outreach?

    Cadence
    Which channels does your company measure: web site visits, phone calls, IVR, store visits, face to face, social media?
    Does company make effective use of cross-channel marketing to promote more frequent customer engagement?
    Does your company rate the patterns relevant for your product or service and monitor usage against this pattern?
    Does your company measure the frequency of both online and offline channels?
    Does your company apply metrics to the frequency of customer engagements with product or services revenues?
    Does your company consolidate data for customer engagement across various channels for a complete view of its customer?

    Catalyst
    Does company offer coupon discounts?
    Does company have a customer loyalty program or a VIP membership program?
    Does company mine customer data to target specific groups of buyers?
    Do internal employees serve as ambassadors for customer programs?
    Does company drive loyalty through social media loyalty programs?
    Does company build rewards based on using loyalty data?
    Does company offer an employee incentive program to drive customer loyalty?

    Read the article

  • creating a pre-menu level select screen

    - by Ephiras
    Hi, I am working on creating a tower defence Java applet game and have come to a roadblock implementing a title screen from which I can select the level and difficulty for the rest of the game. My title screen class is called Menu. From this Menu class I need to pass many different variables into my Main class. I have used different classes before and know how to run them and such, but if both classes extend Applet and each has its own graphics method, how can I run things from Main even though it was created in Menu? What I essentially want to do is run the Menu class with its action listeners and graphics until a difficulty button has been selected, then run the Main class (which works 100% without the Menu class) and pretty much terminate Menu so that I cannot go back to it and no longer see its buttons or graphics. Can I run one applet and, when I choose a button, close it and launch the other one? If you would like to download the full project you can find it here; I had to comment out all the code that wasn't working.

    My Menu class:

        import java.awt.*;
        import java.awt.event.*;
        import java.applet.*;

        public class Menu extends Applet implements ActionListener {
            Button bEasy, bMed, bHard;
            Main m;

            public void init() {
                bEasy = new Button("Easy");
                bEasy.setBounds(140, 200, 100, 50);
                add(bEasy);
                bMed = new Button("Medium");
                bMed.setBounds(280, 200, 100, 50);
                add(bMed);
                bHard = new Button("Hard");
                bHard.setBounds(420, 200, 100, 50);
                add(bHard);
                setLayout(null);
            }

            public void actionPerformed(ActionEvent e) {
                // Java cannot switch on an Object such as e.getSource(),
                // so an if/else chain replaces the original (broken) switch.
                // Main m = new Main(20, 10, 3000, mapMed); // leftover; mapMed is undefined
                if (e.getSource() == bEasy) {
                    m = new Main(6000, 20, "levels/levelEasy.png"); // enemies, towers, world
                } else if (e.getSource() == bMed) {
                    m = new Main(4000, 15, "levels/levelMed.png");
                } else if (e.getSource() == bHard) {
                    m = new Main(2000, 10, "levels/levelEasy.png");
                }
            }

            public void paint(Graphics g) {
                //m.draw(g);
            }
        }

    And here is my Main class initialising code:
        import java.awt.*;
        import java.awt.event.*;
        import java.applet.*;
        import java.io.IOException;

        public class Main extends Applet implements Runnable, MouseListener, MouseMotionListener, ActionListener {
            Button startButton, UpgRange, UpgDamage; //set up the buttons
            Color roadCol, startCol, finCol, selGrass, selRoad; //set up the colors
            Enemy e[][];
            Tower t[];
            Image towerpic, backpic, roadpic, levelPic;
            private Image i;
            private Graphics doubleG;

            //here is the world 0=grass 1=road 2=start 3=end
            int world[][], eStartX, eStartY;
            boolean drawMouse, gameEnd;
            static boolean start = false;
            static int gridLength = 15;
            static int round = 0;
            int Mx, My, timer = 1500;
            static int sqrSize = 31;
            int towers = 0, towerSelected = -10;
            static int castleHealth = 2000;
            String levelPath; //choose the level Easy, Med or Hard

            int maxEnemy[] = {5,7,12,20,30,15,50,30,40,60}; //number of enemies per round
            int maxTowers = 15; //maximum number of towers allowed
            static int money = 2000, damPrice = 600, ranPrice = 350, towerPrice = 700;
            //money = the initial amount of money you start off with
            //damPrice is the price to increase the damage of a tower
            //ranPrice is the price to increase the range of a tower

            public void main(int cH, int mT, int mo, int dP, int rP, int tP, String path, int[] mE) { //constructor 1
                castleHealth = cH;
                maxTowers = mT;
                money = mo;
                damPrice = dP;
                ranPrice = rP;
                towerPrice = tP;
                String levelPath = path;
                maxEnemy = mE;
                buildLevel();
            }

            public void main(int cH, int mT, String path) { //basic constructor
                castleHealth = cH;
                maxTowers = mT;
                String levelPath = path;
                maxEnemy = mE;
                buildLevel();
            }

            public void init() {
                setSize(sqrSize*15+200, sqrSize*15); //set the size of the screen
                roadCol = new Color(255,216,0); //set the colors for the different objects
                startCol = new Color(0,38,255);
                finCol = new Color(255,0,0);
                selRoad = new Color(242,204,155); //selColor is the color of something when your mouse hovers over it
                selGrass = new Color(0,190,0);
                roadpic = getImage(getDocumentBase(), "images/road.jpg");
                towerpic = getImage(getDocumentBase(), "images/tower.png");
                backpic = getImage(getDocumentBase(), "images/grass.jpg");
                levelPic = getImage(getDocumentBase(), "images/level.jpg");
                e = new Enemy[maxEnemy.length][]; //activates all of the enemies
                for (int r = 0; r < e.length; r++)
                    e[r] = new Enemy[maxEnemy[r]];
                t = new Tower[maxTowers];
                for (int i = 0; i < t.length; i++)
                    t[i] = new Tower(); //activates all the towers
                for (int i = 0; i < e.length; i++) //sets all of the enemies' starting co-ordinates
                    for (int j = 0; j < e[i].length; j++)
                        e[i][j] = new Enemy(eStartX, eStartY, world);
                initButtons(); //initialise all the buttons
                addMouseMotionListener(this);
                addMouseListener(this);
            }

    Read the article

  • How do I cleanly design a central render/animation loop?

    - by mtoast
    I'm learning some graphics programming, and am in the midst of my first such project of any substance. But I am really struggling at the moment with how to architect it cleanly. Let me explain.

    To display complicated graphics in my current language of choice (JavaScript -- have you heard of it?), you have to draw graphical content onto a <canvas> element. And to do animation, you must clear the <canvas> after every frame (unless you want previous graphics to remain). Thus, most canvas-related JavaScript demos I've seen have a function like this:

        function render() {
            clearCanvas();
            // draw stuff here
            requestAnimationFrame(render);
        }

    render, as you may surmise, encapsulates the drawing of a single frame. What a single frame contains at a specific point in time, well... that is determined by the program state. So, in order for my program to do its thing, I just need to look at the state, and decide what to render. Right? Right. But that is more complicated than it seems.

    My program is called "Critter Clicker". In my program, you see several cute critters bouncing around the screen. Clicking on one of them agitates it, making it bounce around even more. There is also a start screen, which says "Click to start!" prior to the critters being displayed. Here are a few of the objects I'm working with in my program:

        StartScreenView // represents the start screen
        CritterTubView  // represents the area in which the critters live
        CritterList     // a collection of all the critters
        Critter         // a single critter model
        CritterView     // view of a single critter

    Nothing too egregious with this, I think. Yet, when I set out to flesh out my render function, I get stuck, because everything I write seems utterly ugly and reminiscent of a certain popular Italian dish. Here are a couple of approaches I've attempted, with my internal thought process included, and unrelated bits excluded for clarity.

    Approach 1: "It's conditions all the way down"

        // "I'll just write the program as I think it, one frame at a time."
        if (assetsLoaded) {
            if (userClickedToStart) {
                if (critterTubDisplayed) {
                    if (crittersDisplayed) {
                        forEach(crittersList, function(c) {
                            if (c.wasClickedRecently) {
                                c.getAgitated();
                            }
                        });
                    } else {
                        displayCritters();
                    }
                } else {
                    displayCritterTub();
                }
            } else {
                displayStartScreen();
            }
        }

    That's a very much simplified example. Yet even with only a fraction of all the rendering conditions visible, render is already starting to get out of hand. So, I dispense with that and try another idea:

    Approach 2: Under the Rug

        // "Each view object shall be responsible for its own rendering.
        //  I'll pass each object the program state, and each can render itself."
        startScreen.render(state);
        critterTub.render(state);
        critterList.render(state);

    In this setup, I've essentially just pushed those crazy nested conditions to a deeper level in the code, hiding them from view. In other words, startScreen.render would check state to see if it needed actually to be drawn or not, and take the correct action. But this seems more like it only solves a code-aesthetic problem. The third and final approach I'm considering, and the one I'll share, is the idea that I could invent my own "wheel" to take care of this. I'm envisioning a function that takes a data structure that defines what should happen at any given point in the render call -- revealing the conditions and dependencies as a kind of tree.
    Approach 3: Mad Scientist

        renderTree({
            phases: ['startScreen', 'critterTub', 'endCredits'],
            dependencies: {
                startScreen: ['assetsLoaded'],
                critterTub: ['startScreenClicked'],
                critterList: ['critterTubDisplayed']
                // etc.
            },
            exclusions: {
                startScreen: ['startScreenClicked'],
                // etc.
            }
        });

    That seems kind of cool. I'm not exactly sure how it would actually work, but I can see it being a rather nifty way to express things, especially if I flex some of JavaScript's events. In any case, I'm a little bit stumped because I don't see an obvious way to do this. If you couldn't tell, I'm coming to this from the web development world, and finding that doing animation is a bit more exotic than arranging an MVC application for handling simple request-response cycles. What is the clean, established solution to this common-I-would-think problem?

    Read the article

  • OpenGL problem with FBO integer texture and color attachment

    - by Grieverheart
    In my simple renderer, I have 2 FBOs: one that contains diffuse, normals, instance ID and depth, in that order, and one that I use to store the SSAO result. The textures I use for the first FBO are RGB8, RGBA16F, R32I and GL_DEPTH_COMPONENT32F for the depth. For the second FBO I use an R16F texture. My rendering process is to first render everything I mentioned into the first FBO, then bind the depth and normals textures for reading in the SSAO pass and write to the second FBO. After that I bind the second FBO's texture for reading in my blur shader and bind the first FBO for writing. What I intend to do is write the blurred SSAO value to the alpha component of the normals texture.

    Here is where the problems start. First of all, I use shading language 3.3, which my graphics card does support. I manage outputs in my shaders using layout(location = #). Now, the normals texture should be bound to color attachment 1, but when I use 1, it seems to write to my diffuse texture, which should be in color attachment 0. When I instead use layout(location = 0), it gets correctly written to my normals texture. Besides this, my instance ID texture also gets reset after running the blur shader, which is weird, because if I use a float texture and write instanceID / nInstances to it, the texture doesn't get reset after the blur shader has run.

    Here is how I prepare my first FBO:

        bool CGBuffer::Init(unsigned int WindowWidth, unsigned int WindowHeight){
            //Create FBO
            glGenFramebuffers(1, &m_fbo);
            glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);

            //Create gbuffer and Depth Buffer Textures
            glGenTextures(GBUFF_NUM_TEXTURES, &m_textures[0]);
            glGenTextures(1, &m_depthTexture);

            //prepare gbuffer
            for(unsigned int i = 0; i < GBUFF_NUM_TEXTURES; i++){
                glBindTexture(GL_TEXTURE_2D, m_textures[i]);
                if(i == GBUFF_TEXTURE_TYPE_NORMAL)
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, WindowWidth, WindowHeight, 0, GL_RGBA, GL_FLOAT, NULL);
                else if(i == GBUFF_TEXTURE_TYPE_DIFFUSE)
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, WindowWidth, WindowHeight, 0, GL_RGB, GL_FLOAT, NULL);
                else if(i == GBUFF_TEXTURE_TYPE_ID)
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32I, WindowWidth, WindowHeight, 0, GL_RED_INTEGER, GL_INT, NULL);
                else{
                    std::cout << "Error in FBO initialization" << std::endl;
                    return false;
                }
                glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, m_textures[i], 0);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
            }

            //prepare depth buffer
            glBindTexture(GL_TEXTURE_2D, m_depthTexture);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, WindowWidth, WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
            glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthTexture, 0);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);

            GLenum DrawBuffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2};
            glDrawBuffers(GBUFF_NUM_TEXTURES, DrawBuffers);

            GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
            if(Status != GL_FRAMEBUFFER_COMPLETE){
                std::cout << "FB error, status 0x" << std::hex << Status << std::endl;
                return false;
            }

            //Restore default framebuffer
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            return true;
        }

    where I use an enum defined as:

        enum GBUFF_TEXTURE_TYPE{
            GBUFF_TEXTURE_TYPE_DIFFUSE,
            GBUFF_TEXTURE_TYPE_NORMAL,
            GBUFF_TEXTURE_TYPE_ID,
            GBUFF_NUM_TEXTURES
        };

    Am I missing some kind of restriction? Does the color attachment of the FBO's textures somehow get reset? That is, I'm using a re-size function which re-sizes the textures of the FBO, but should I perhaps call glFramebufferTexture2D again too?

    EDIT: Here is the shader in question:

        #version 330 core

        uniform sampler2D aoSampler;
        uniform vec2 TEXEL_SIZE; // x = 1/res x, y = 1/res y
        uniform bool use_blur;

        noperspective in vec2 TexCoord;
        layout(location = 0) out vec4 out_AO;

        void main(void){
            if(use_blur){
                float result = 0.0;
                for(int i = -1; i < 2; i++){
                    for(int j = -1; j < 2; j++){
                        vec2 offset = vec2(TEXEL_SIZE.x * i, TEXEL_SIZE.y * j);
                        // -0.004 because the texture seems to be a bit displaced
                        result += texture(aoSampler, TexCoord + offset).r;
                    }
                }
                out_AO = vec4(vec3(0.0), result / 9);
            } else
                out_AO = vec4(vec3(0.0), texture(aoSampler, TexCoord).r);
        }

    Read the article

  • MVVM - how to make creating viewmodels at runtime less painful

    - by Mr Happy
    I apologize for the long question; it reads a bit as a rant, but I promise it's not! I've summarized my question(s) below.

    In the MVC world, things are straightforward. The Model has state, the View shows the Model, and the Controller does stuff to/with the Model (basically); a controller has no state. To do stuff, the Controller has some dependencies on web services, a repository, the lot. When you instantiate a controller you care about supplying those dependencies, nothing else. When you execute an action (a method on the Controller), you use those dependencies to retrieve or update the Model or call some other domain service. If there's any context, say some user wants to see the details of a particular item, you pass the Id of that item as a parameter to the Action. Nowhere in the Controller is there any reference to any state. So far so good.

    Enter MVVM. I love WPF, I love data binding. I love frameworks that make data binding to ViewModels even easier (using Caliburn Micro a.t.m.). I feel things are less straightforward in this world, though. Let's do the exercise again: the Model has state, the View shows the ViewModel, and the ViewModel does stuff to/with the Model (basically), but a ViewModel does have state! (To clarify: maybe it delegates all the properties to one or more Models, but that means it must have a reference to the model one way or another, which is state in itself.) To do stuff, the ViewModel has some dependencies on web services, a repository, the lot. When you instantiate a ViewModel you care about supplying those dependencies, but also the state. And this, ladies and gentlemen, annoys me to no end.

    Whenever you need to instantiate a ProductDetailsViewModel from the ProductSearchViewModel (from which you called the ProductSearchWebService, which in turn returned IEnumerable<ProductDTO>, everybody still with me?), you can do one of these things:

    - Call new ProductDetailsViewModel(productDTO, _shoppingCartWebService /* dependency */);. This is bad: imagine 3 more dependencies; this means the ProductSearchViewModel needs to take on those dependencies as well. Also, changing the constructor is painful.
    - Call _myInjectedProductDetailsViewModelFactory.Create().Initialize(productDTO);. The factory is just a Func, easily generated by most IoC frameworks. I think this is bad because Init methods are a leaky abstraction. You also can't use the readonly keyword for fields that are set in the Init method. I'm sure there are a few more reasons.
    - Call _myInjectedProductDetailsViewModelAbstractFactory.Create(productDTO);. So... this is the pattern (abstract factory) that is usually recommended for this type of problem (a minimal sketch of it appears at the end of this post). I thought it was genius, since it satisfies my craving for static typing, until I actually started using it. The amount of boilerplate code is, I think, too much (you know, apart from the ridiculous variable names I get to use). For each ViewModel that needs runtime parameters you'll get two extra files (factory interface and implementation), and you need to type the non-runtime dependencies something like 4 extra times. And each time the dependencies change, you get to change the factory as well. It feels like I don't even use a DI container anymore. (I think Castle Windsor has some kind of solution for this [with its own drawbacks, correct me if I'm wrong].)
    - Do something with anonymous types or a dictionary. I like my static typing.

    So, yeah. Mixing state and behavior in this way creates a problem which doesn't exist at all in MVC.
    And I feel like there currently isn't a really adequate solution for this problem. Now I'd like to observe some things:

    - People actually use MVVM. So they either don't care about all of the above, or they have some brilliant other solution.
    - I haven't found an in-depth example of MVVM with WPF. For example, the NDDD-sample project immensely helped me understand some DDD concepts. I'd really like it if someone could point me in the direction of something similar for MVVM/WPF.
    - Maybe I'm doing MVVM all wrong and I should turn my design upside down. Maybe I shouldn't have this problem at all.

    Well, I know other people have asked the same question, so I think I'm not the only one.

    To summarize:

    - Am I correct to conclude that having the ViewModel be an integration point for both state and behavior is the reason for some difficulties with the MVVM pattern as a whole?
    - Is using the abstract factory pattern the only/best way to instantiate a ViewModel in a statically typed way?
    - Is there something like an in-depth reference implementation available?
    - Is having a lot of ViewModels with both state/behavior a design smell?
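    For reference, here is a minimal sketch of the abstract-factory option from the list above, using the names from my example (IShoppingCartWebService is an assumed name for the interface behind _shoppingCartWebService):

        // The runtime parameter (productDTO) comes from the caller;
        // the design-time dependencies are supplied once, by the container.
        public interface IProductDetailsViewModelFactory
        {
            ProductDetailsViewModel Create(ProductDTO productDTO);
        }

        public class ProductDetailsViewModelFactory : IProductDetailsViewModelFactory
        {
            private readonly IShoppingCartWebService _shoppingCartWebService;

            public ProductDetailsViewModelFactory(IShoppingCartWebService shoppingCartWebService)
            {
                _shoppingCartWebService = shoppingCartWebService;
            }

            public ProductDetailsViewModel Create(ProductDTO productDTO)
            {
                // Every new dependency has to be threaded through here as well,
                // which is exactly the boilerplate complained about above.
                return new ProductDetailsViewModel(productDTO, _shoppingCartWebService);
            }
        }

    The ProductSearchViewModel then takes only IProductDetailsViewModelFactory as a constructor dependency and calls _factory.Create(productDTO) when navigating, which keeps the runtime state out of its own constructor at the cost of the extra files described above.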

    Read the article

  • Building an OpenStack Cloud for Solaris Engineering, Part 1

    - by Dave Miner
    One of the signature features of the recently-released Solaris 11.2 is the OpenStack cloud computing platform.  Over on the Solaris OpenStack blog the development team is publishing lots of details about our version of OpenStack Havana as well as some tips on specific features, and I highly recommend reading those to get a feel for how we've leveraged Solaris's features to build a top-notch cloud platform.  In this and some subsequent posts I'm going to look at it from a different perspective, which is that of the enterprise administrator deploying an OpenStack cloud.  But this won't be just a theoretical perspective: I've spent the past several months putting together a deployment of OpenStack for use by the Solaris engineering organization, and now that it's in production we'll share how we built it and what we've learned so far.

    In the Solaris engineering organization we've long had dedicated lab systems dispersed among our various sites and a home-grown reservation tool for developers to reserve those systems; various teams also have private systems for specific testing purposes.  But as a developer, it can still be difficult to find systems you need, especially since most Solaris changes require testing on both SPARC and x86 systems before they can be integrated.  We've added virtual resources over the years as well in the form of LDOMs and zones (both traditional non-global zones and the new kernel zones).  Fundamentally, though, these were all still deployed in the same model: our overworked lab administrators set up pre-configured resources and we then reserve them.  Sounds like pretty much every traditional IT shop, right?  Which means that there's a lot of opportunity for efficiencies from greater use of virtualization and the self-service style of cloud computing.  As we were well into development of OpenStack on Solaris, I was recruited to figure out how we could deploy it to both provide more (and more efficient) development and test resources for the organization as well as a test environment for Solaris OpenStack.

    At this point, let's acknowledge one fact: deploying OpenStack is hard.  It's a very complex piece of software that makes use of sophisticated networking features and runs as a ton of service daemons with myriad configuration files.  The web UI, Horizon, doesn't often do a good job of providing detailed errors.  Even the command-line clients are not as transparent as you'd like, though at least you can turn on verbose and debug messaging and often get some clues as to what to look for, though it helps if you're good at reading JSON structure dumps.  I'd already learned all of this in doing a single-system Grizzly-on-Linux deployment for the development team to reference when they were getting started so I at least came to this job with some appreciation for what I was taking on.  The good news is that both we and the community have done a lot to make deployment much easier in the last year; probably the easiest approach is to download the OpenStack Unified Archive from OTN to get your hands on a single-system demonstration environment.  I highly recommend getting started with something like it to get some understanding of OpenStack before you embark on a more complex deployment.  For some situations, it may in fact be all you ever need.  If so, you don't need to read the rest of this series of posts!

    In the Solaris engineering case, we need a lot more horsepower than a single-system cloud can provide.
    We need to support both SPARC and x86 VM's, and we have hundreds of developers so we want to be able to scale to support thousands of VM's, though we're going to build to that scale over time, not immediately.  We also want to be able to test both Solaris 11 updates and a release such as Solaris 12 that's under development so that we can work out any upgrade issues before release.  One thing we don't have is a requirement for extremely high availability, at least at this point.  We surely don't want a lot of down time, but we can tolerate scheduled outages and brief (as in an hour or so) unscheduled ones.  Thus I didn't need to spend effort on trying to get high availability everywhere.

    The diagram below shows our initial deployment design.  We're using six systems, most of which are x86 because we had more of those immediately available.  All of those systems reside on a management VLAN and are connected with a two-way link aggregation of 1 Gb links (we don't yet have 10 Gb switching infrastructure in place, but we'll get there).  A separate VLAN provides "public" (as in connected to the rest of Oracle's internal network) addresses, while we use VxLANs for the tenant networks.  One system is more or less the control node, providing the MySQL database, RabbitMQ, Keystone, and the Nova API and scheduler as well as the Horizon console.  We're curious how this will perform and I anticipate eventually splitting at least the database off to another node to help simplify upgrades, but at our present scale this works.

    I had a couple of systems with lots of disk space, one of which was already configured as the Automated Installation server for the lab, so it's just providing the Glance image repository for OpenStack.  The other node with lots of disks provides the Cinder block storage service; we also have a ZFS Storage Appliance that will help back-end Cinder in the near future, I just haven't had time to get it configured in yet.

    There's a separate system for Neutron, which is our Elastic Virtual Switch controller and handles the routing and NAT for the guests.  We don't have any need for firewalling in this deployment so we're not doing so.  We presently have only two tenants defined, one for the Solaris organization that's funding this cloud, and a separate tenant for other Oracle organizations that would like to try out OpenStack on Solaris.  Each tenant has one VxLAN defined initially, but we can of course add more.  Right now we have just a single /24 network for the floating IP's; once demand grows to where we need more, we'll add them.

    Finally, we have started with just two compute nodes; one is an x86 system, the other is an LDOM on a SPARC T5-2.  We'll be adding more when demand reaches the level where we need them, but as we're still ramping up the user base it's less work to manage fewer nodes until then.

    My next post will delve into the details of building this OpenStack cloud's infrastructure, including how we're using various Solaris features such as Automated Installation, IPS packaging, SMF, and Puppet to deploy and manage the nodes.  After that we'll get into the specifics of configuring and running OpenStack itself.

    Read the article

  • A Visual Studio Release Grows in Brooklyn

    - by andrewbrust
    Yesterday, Microsoft held its flagship launch event for Office 2010 in Manhattan.  Today, the Redmond software company is holding a local launch event for Visual Studio (VS) 2010, in Brooklyn.  How come information workers get the 212 treatment and developers are relegated to 718?

    Well, here’s the thing: the Brooklyn Marriott is actually a great place for an event, but you need some intimate knowledge of New York City to know that.  NBC’s Studio 8H, where the Office launch was held yesterday (and from where SNL is broadcast), is a pretty small venue, but you’d need some inside knowledge to recognize that.  Likewise, while Office 2010 is a product whose value is apparent, appreciating VS 2010’s value takes a bit more savvy.  Setting aside its year-based designation, this release of VS, counting the old Visual Basic releases, is the 10th version of the product.  How can a developer audience get excited about an integrated development environment when it reaches double-digit version numbers?  Well, it can be tough.  Luckily, Microsoft sent Jay Schmelzer, a Group Program Manager from the Visual Studio team in Redmond, to come tell the Brooklyn audience why they should be excited.

    Turns out there are a lot of reasons.  Support for SharePoint development is a big one.  In previous versions of VS, that support has been anemic, at best.  The shortage of SharePoint developers is a huge issue in the industry, and this should help.  There’s also built-in support for Windows Azure (Microsoft’s cloud platform) and, through a download, support for the forthcoming Windows Phone 7 platform.  ASP.NET MVC, a “close-to-the-metal” Web development option that does away with the Web Forms abstraction layer, has a first-class presence in VS.  So too does jQuery, the Open Source environment that makes JavaScript development a breeze.  The jQuery support is so good that Microsoft now contributes to that Open Source project and offers IntelliSense support for it in the code editor.

    Speaking of the VS code editor, it now supports multi-monitor setups, zoom-in, and block selection.  If you’re not a developer, this may sound confusing and minute.  I’ll just say that for people who are developers these are little things that really contribute to productivity, and that translates into lower development costs.

    The really cool demo, though, was around Visual Studio 2010’s new debugging features.  This stuff is hard to showcase, but I believe it’s truly breakthrough technology: imagine being able to step backwards in time to see what might have caused a bug.  Cool?  Now imagine being able to do that, even if you weren’t the tester and weren’t present while the testing was being done.  Then imagine being able to see a video screen capture of what the tester was doing with your app when the bug occurred.  VS 2010 allows all that.  This could be the demise of the IWOMM (“it works on my machine”) syndrome.

    After the keynote, I asked Schmelzer if any of Microsoft’s competitors have debugging tools that come close to VS 2010’s.  His answer was an earnest “we don’t think so.”  If that’s true, that’s a big deal, and a huge advantage for developer teams who adopt it.  It will make software development much cheaper and more efficient.  Kind of like holding a launch event at the Brooklyn Marriott instead of 30 Rock in Manhattan!

    VS 2010 (version 10) and Office 2010 (version 14) aren’t the only new product versions Microsoft is releasing right now.
    There’s also SQL Server 2008 R2 (version 10.5), Exchange 2010 (version 8, I believe), SharePoint 2010 (version 4) and, of course, Windows 7.  With so many new versions at such levels of maturity, I think it’s fair to say Microsoft has reached middle-age.  How does a company stave off a potential mid-life crisis, especially with young Turks like Google coming along and competing so fiercely?  Hard to say.  But if focusing on core value, including value that’s hard to play into a sexy demo, is part of the answer, then Microsoft’s doing OK.  And if some new tricks, like Windows Phone 7, can gain some traction, that might round things out nicely.

    Are the legacy products old tricks, or are they revised classics?  I honestly don’t know, because it’s the market’s prerogative to pass that judgement.  I can say this though: based on today’s show, I think Microsoft’s been doing its homework.

    Read the article

  • Moving DataSets through BizTalk

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman]

    Yuck. But sometimes you have to, so here are a couple of things to bear in mind:

    Schemas

    Point a codegen tool at a WCF endpoint which exposes a DataSet and it will generate an XSD which describes the DataSet like this:

        <xs:element minOccurs="0" name="GetDataSetResult" nillable="true">
          <xs:complexType>
            <xs:annotation>
              <xs:appinfo>
                <ActualType Name="DataSet"
                            Namespace="http://schemas.datacontract.org/2004/07/System.Data"
                            xmlns="http://schemas.microsoft.com/2003/10/Serialization/" />
              </xs:appinfo>
            </xs:annotation>
            <xs:sequence>
              <xs:element ref="xs:schema" />
              <xs:any />
            </xs:sequence>
          </xs:complexType>
        </xs:element>

    In a serialized instance, the element of type xs:schema contains a full schema which describes the structure of the DataSet – tables, columns etc. The second element, of type xs:any, contains the actual content of the DataSet, expressed as DiffGrams:

        <GetDataSetResult>
          <xs:schema id="NewDataSet" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
            <xs:element name="NewDataSet" msdata:IsDataSet="true" msdata:UseCurrentLocale="true">
              <xs:complexType>
                <xs:choice minOccurs="0" maxOccurs="unbounded">
                  <xs:element name="Table1">
                    <xs:complexType>
                      <xs:sequence>
                        <xs:element name="Id" type="xs:string" minOccurs="0" />
                        <xs:element name="Name" type="xs:string" minOccurs="0" />
                        <xs:element name="Date" type="xs:string" minOccurs="0" />
                      </xs:sequence>
                    </xs:complexType>
                  </xs:element>
                </xs:choice>
              </xs:complexType>
            </xs:element>
          </xs:schema>
          <diffgr:diffgram xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
            <NewDataSet xmlns="">
              <Table1 diffgr:id="Table11" msdata:rowOrder="0" diffgr:hasChanges="inserted">
                <Id>377fdf8d-cfd1-4975-a167-2ddb41265def</Id>
                <Name>157bc287-f09b-435f-a81f-2a3b23aff8c4</Name>
                <Date>a5d78d83-6c9a-46ca-8277-f2be8d4658bf</Date>
              </Table1>
            </NewDataSet>
          </diffgr:diffgram>
        </GetDataSetResult>

    Put the XSD into a BizTalk schema and it will fail to compile, giving you the error: The 'http://www.w3.org/2001/XMLSchema:schema' element is not declared. You should be able to work around that, but I've had no luck in BizTalk Server 2006 R2 – instead you can safely change that xs:schema element to be another xs:any type:

        <xs:element minOccurs="0" name="GetDataSetResult" nillable="true">
          <xs:complexType>
            <xs:sequence>
              <xs:any />
              <xs:any />
            </xs:sequence>
          </xs:complexType>
        </xs:element>

    (This snippet omits the annotation, but you can leave it in the schema). For an XML instance to pass validation through the schema, you'll also need to flag the xs:any elements so they can contain any namespace and skip validation:

        <xs:element minOccurs="0" name="GetDataSetResult" nillable="true">
          <xs:complexType>
            <xs:sequence>
              <xs:any namespace="##any" processContents="skip" />
              <xs:any namespace="##any" processContents="skip" />
            </xs:sequence>
          </xs:complexType>
        </xs:element>

    You should now have a compiling schema which can be successfully tested against a serialised DataSet.
    Transforms

    If you're mapping a DataSet element between schemas, you'll need to use the Mass Copy Functoid to populate the target node from the contents of both of the xs:any type elements on the source node. This should give you a compiled map which you can test against a serialized instance. And if you have a .NET consumer on the other side of the mapped BizTalk output, it will correctly deserialize the response into a DataSet.
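    As a quick way to confirm that consumer-side round trip, here is a small sketch (again not from the original post) that reads a serialised instance back into a DataSet the same way a .NET client would; it assumes the dataset-instance.xml file written by the earlier sketch.

        // Minimal sketch: read a serialised instance back into a DataSet to verify the round trip.
        using System;
        using System.Data;
        using System.Runtime.Serialization;
        using System.Xml;

        class ReadDataSetInstance
        {
            static void Main()
            {
                var serializer = new DataContractSerializer(typeof(DataSet));
                using (var reader = XmlReader.Create("dataset-instance.xml"))
                {
                    var ds = (DataSet)serializer.ReadObject(reader);
                    Console.WriteLine("Tables: {0}, rows in first table: {1}",
                        ds.Tables.Count, ds.Tables[0].Rows.Count);
                }
            }
        }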

    Read the article

  • Advice for a distracted, unhappy, recently graduated programmer? [closed]

    - by Re-Invent
    I graduated 4 months ago. I had offers from a few good places to work at. At the same time I wanted to stick to building a small software business of my own; I still have some ideas with good potential, and some half-done projects frozen in my github. But due to social pressures, I chose a job. The pay is great, but I am only half-passionate about it. A small team of smart folks building a useful product, working out contracts across the world. I've started finding it extremely boring. Boring to the extent that I skip 2-3 days a week altogether, not doing work. Neither do I spend that time progressing any of my own projects. Yes, I feel stupid at the way I'm wasting time, but I don't understand exactly why it is happening. It's as if all the excitement has been drained. What can I do about it?

    Long version:

    School - I was in third standard. Only students from 6th grade up had access to the computer labs. I once peeked into the lab from the little door opening. No hard disks, MS-DOS on 5 1/4 inch floppies. I asked a senior student to play some sound in BASIC. He used PLAY to compose a tune. Boy! I was so excited, I was jumping from within. Back home, I asked my brother to teach me some programming. We bought a book, "MODERN All About GW-BASIC for Schools & Colleges". The book had everything, right from printing, to taking input, file I/O, game programming, machine-level support, etc. I was in 6th standard when I wrote my first game - a wheel of fortune, rotating the wheel by manipulating the 16-color palette's definition. Got internet soon, got hooked to the QuickBasic programming community. Made some more games, "007 in Danger" and "Car Crush 2", for submission to the allbasiccode archives. I was extremely excited about all this. My interests then swayed into "hacking" (computer security). Taught myself some Perl, found it annoying, learnt PHP and a bit of SQL. Also taught myself Visual Basic one winter and wrote a Pacman clone with DirectX. By the time I was in 10th standard, I had created some evil tools using Visual Basic, PHP and MySQL and eventually landed myself an unpaid side-job at a government facility, building evil tools for them. It was a dream come true for crackers of that time. And so I was, still very excited. Things changed soon: the last two years of school were not so great, as I was balancing prep for college, work at the govt. facility, and studies for school at the same time.

    College - College was the opposite of all I had wished it to be. I imagined it to be a place where I'd spend my 4 years building something awesome. It was rather an epitome of rote learning, attendance, rules, busy schedules, a ban on personal laptops, hardly any hackers surrounding you and shit like that. We had to take permission to even introduce some cultural/creative activities into our annual schedule. The labs wouldn't be open on weekends because the lab employees had to have their leave. Yes, a horrible place for someone like me. I still managed to pull off a project with a friend over 2 months. Showed it to people high in the academic hierarchy. They were immensely impressed, and we proposed allowing personal computers for students. They made up half-assed reasons and didn't agree. We felt frustrated. And so on. I still managed to teach myself new languages, do new projects of my own, do an internship at the same govt. facility, start a small business for some time, give a talk at a conference I'm passionate about, win game-dev and hacking contests at most respected colleges, solve a good deal of programming contest problems, etc. 
    At the same time I was not content with all these restrictions, the great emphasis on rote learning, and the sheer wastage of time due to college. I never felt I was overdoing it, but now I feel I burnt myself out. During my last days at college, I did an internship at a bigco. While I spent my time building prototypes for a certain LBS, the other interns around me, even a good friend, were just skipping work. I thought maybe in a few weeks he would put some serious effort into the work assigned to him, but all he did was find creative ways to skip work, hide from his manager, engage people in talk if they tried to question his progress, etc. I tried a few times to get him on track, but it seems all he wanted was to "not work hard at all and still reap the fruits". I don't know how others take such people, but I find their vicinity very, very poisonous to one's own motivation and productivity. On top of that, in the place where I come from, HR doesn't give much value to what you have done over the past 4 years. So towards the end of our internship, we were all offered work at the bigco, but the slacker, even after not writing more than 200 lines of code, was made a much better offer. I felt enraged instantly - "Is this how the corporate world treats someone who does fruitful, if not extraordinary, work for them for the past 6 months?". Yes, I did try to negotiate and debate. The bigcos seem blind due to departmentalization of responsibilities and many layers of management. I decided not to stay in touch with any characters of that depressing play. Probably the busy time I had at college, ignoring friends, ignoring fun and squeezing every bit of free time for myself, is also responsible. Probably this is what has drained all my willingness to work for anyone. I find my day job boring; at the same time I wish to keep it for financial reasons. I feel a bit burnt out, unsatisfied, and at the same time I feel an urge to quit working for someone else and start finishing my frozen side-projects (which may be profitable), though I haven't got much in savings to support myself with food, office, internet bills, etc. I still have my day job, but I don't find it very interesting; even though the pay is higher than the slacker's, I don't find money to be a great motivator here. I keep comparing myself to my past self. I wonder how I can get rid of this and reboot myself back to the way I was in school days - excited, tinkering, building, learning new things daily, and NOT BORED?

    Read the article

  • Migrating Core Data to new UIManagedDocument in iOS 5

    - by samerpaul
    I have an app that has been on the store since iOS 3.1, so there is a large install base out there that still uses Core Data loaded up in my AppDelegate. In the most recent set of updates, I raised the minimum version to 4.3 but still kept the same way of loading the data. Recently, I decided it's time to make the minimum version 5.1 (especially with 6 around the corner), so I wanted to start using the new fancy UIManagedDocument way of using Core Data. The issue with this, though, is that the old database file is still sitting in the app on the device, so nothing migrates to the new document on its own. You basically have to subclass UIManagedDocument with a new model class and override a couple of methods to do it for you. Here's a tutorial on what I did for my app TimeTag.

    Step One: Add a new class file in Xcode and subclass "UIManagedDocument"

    Go ahead and also add a method to get the managedObjectModel out of this class. It should look like:

        @interface TimeTagModel : UIManagedDocument

        - (NSManagedObjectModel *)managedObjectModel;

        @end

    Step two: Writing the methods in the implementation file (.m)

    I first added a shortcut method for the applicationDocumentsDirectory, which returns the URL of the app directory.

        - (NSURL *)applicationDocumentsDirectory
        {
            return [[[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask] lastObject];
        }

    The next step was to pull the managedObjectModel file itself (.momd file). In my project, it's called "minimalTime".

        - (NSManagedObjectModel *)managedObjectModel
        {
            NSString *path = [[NSBundle mainBundle] pathForResource:@"minimalTime" ofType:@"momd"];
            NSURL *momURL = [NSURL fileURLWithPath:path];
            NSManagedObjectModel *managedObjectModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:momURL];

            return managedObjectModel;
        }

    After that, I need to check for a legacy installation and migrate it to the new UIManagedDocument file instead. This is the overridden method:

        - (BOOL)configurePersistentStoreCoordinatorForURL:(NSURL *)storeURL ofType:(NSString *)fileType modelConfiguration:(NSString *)configuration storeOptions:(NSDictionary *)storeOptions error:(NSError **)error
        {
            // If the legacy store exists, copy it to the new location
            NSURL *legacyPersistentStoreURL = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTime.sqlite"];

            NSFileManager *fileManager = [NSFileManager defaultManager];
            if ([fileManager fileExistsAtPath:legacyPersistentStoreURL.path])
            {
                NSLog(@"Old db exists");
                NSError *thisError = nil;
                [fileManager replaceItemAtURL:storeURL withItemAtURL:legacyPersistentStoreURL backupItemName:nil options:NSFileManagerItemReplacementUsingNewMetadataOnly resultingItemURL:nil error:&thisError];
            }

            return [super configurePersistentStoreCoordinatorForURL:storeURL ofType:fileType modelConfiguration:configuration storeOptions:storeOptions error:error];
        }

    Basically, what's happening above is that it checks for the minimalTime.sqlite file inside the app's documents directory on the iOS device. If the file exists, it tells you so in the console, and then tells the fileManager to replace the storeURL (from the method parameter) with the legacy URL. This gives your app access to all the existing data the user has generated (otherwise they would load into a blank app, which would be disastrous). It returns YES if successful (by calling its [super] method). 
    Final step: Actually load this database

    Due to how my app works, I actually have to load the database at launch (instead of shortly after, which would be ideal). I call a method called loadDatabase, which looks like this:

        - (void)loadDatabase
        {
            static dispatch_once_t onceToken;

            // Only do this once!
            dispatch_once(&onceToken, ^{
                // Get the URL
                // The minimalTimeDB name is just something I call it
                NSURL *url = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTimeDB"];
                // Init the TimeTagModel (our custom class we wrote above) with the URL
                self.timeTagDB = [[TimeTagModel alloc] initWithFileURL:url];

                // Setup the undo manager if it's nil
                if (self.timeTagDB.undoManager == nil){
                    NSUndoManager *undoManager = [[NSUndoManager alloc] init];
                    [self.timeTagDB setUndoManager:undoManager];
                }

                // You have to actually check to see if it exists already (for some reason you can't just call "open it, and if it's not there, create it")
                if ([[NSFileManager defaultManager] fileExistsAtPath:[url path]]) {
                    // If it does exist, try to open it, and if it doesn't open, let the user (or at least you) know!
                    [self.timeTagDB openWithCompletionHandler:^(BOOL success){
                        if (!success) {
                            // Handle the error.
                            NSLog(@"Error opening up the database");
                        }
                        else{
                            NSLog(@"Opened the file--it already existed");
                            [self refreshData];
                        }
                    }];
                }
                else {
                    // If it doesn't exist, you need to attempt to create it
                    [self.timeTagDB saveToURL:url forSaveOperation:UIDocumentSaveForCreating completionHandler:^(BOOL success){
                        if (!success) {
                            // Handle the error.
                            NSLog(@"Error creating the database");
                        }
                        else{
                            NSLog(@"Created the file--it did not exist");
                            [self refreshData];
                        }
                    }];
                }
            });
        }

    If you're curious what refreshData looks like, it sends out an NSNotification that the database has been loaded:

        - (void)refreshData
        {
            NSNotification *refreshNotification = [NSNotification notificationWithName:kNotificationCenterRefreshAllDatabaseData object:self.timeTagDB.managedObjectContext userInfo:nil];
            [[NSNotificationCenter defaultCenter] postNotification:refreshNotification];
        }

    The kNotificationCenterRefreshAllDatabaseData is just a constant I have defined elsewhere that keeps track of all the NSNotification names I use. I pass the managedObjectContext of the newly created file so that my view controllers can have access to it and start passing it around to one another. The reason we do this as a Notification is that the open/create runs in the background, so we can't know exactly when it finishes. Make sure you design your app for this! Have some kind of loading indicator, or make sure your user can't attempt to create a record before the database actually exists, because it will crash the app.

    Read the article

  • yield – Just yet another sexy c# keyword?

    - by George Mamaladze
    The yield operator (see the MSDN C# reference) came, I guess, with .NET 2.0, and my feeling is that it's not as widely used as it could (or should) be. I am not going to talk here about the necessity and advantages of using the iterator pattern when accessing custom sequences (just google it). Let's look at it from the clean code point of view. Let's see if it really helps us to keep our code understandable, reusable and testable.

    Let's say we want to iterate a tree and do something with its nodes, for instance calculate a sum of their values. The most elegant way would seem to be a recursive method performing a classic depth-first traversal and returning the sum.

        private int CalculateTreeSum(Node top)
        {
            int sumOfChildNodes = 0;
            foreach (Node childNode in top.ChildNodes)
            {
                sumOfChildNodes += CalculateTreeSum(childNode);
            }
            return top.Value + sumOfChildNodes;
        }

    "Do One Thing"

    Nevertheless, it violates one of the most important rules: "Do One Thing". Our method CalculateTreeSum does two things at the same time: it traverses the tree and it performs some computation – in this case it calculates a sum. Doing two things in one method is definitely a bad thing, for several reasons:

    ·  Understandability: readability / refactoring.
    ·  Reusability: when overriding, there is no chance to override the computation without copying the iteration code, and vice versa.
    ·  Testability: you are not able to test the computation without constructing the tree, and you are not able to test the correctness of the tree iteration on its own.

    I want to spend some more words on this last issue. How do you test the method CalculateTreeSum when it contains two in one: computation & iteration? The only option is to construct a test tree and assert the result of the method call – in our case the sum – against our expectation. And if the test fails, you do not know whether the computation algorithm was wrong or the iteration. To top it all off: according to Murphy's Law the iteration will have a bug as well as the calculation. Both bugs in combination will cause the sum to be accidentally exactly the one you expect, and the test will PASS. :)

    Ok, let's use yield!

    That's why it is generally a very good idea not to mix but to isolate "things". Ok, let's use yield!

        private int CalculateTreeSumClean(Node top)
        {
            IEnumerable<Node> treeNodes = GetTreeNodes(top);
            return CalculateSum(treeNodes);
        }

        private int CalculateSum(IEnumerable<Node> nodes)
        {
            int sumOfNodes = 0;
            foreach (Node node in nodes)
            {
                sumOfNodes += node.Value;
            }
            return sumOfNodes;
        }

        private IEnumerable<Node> GetTreeNodes(Node top)
        {
            yield return top;
            foreach (Node childNode in top.ChildNodes)
            {
                foreach (Node currentNode in GetTreeNodes(childNode))
                {
                    yield return currentNode;
                }
            }
        }

    The two methods do not know anything about each other. One contains the calculation logic, the other just the iteration logic. You can replace the tree iteration algorithm, switching from depth-first to breadth-first traversal, or use a stack or the visitor pattern instead of recursion. This will not influence your calculation logic. 
    And vice versa, you can replace the sum with a product or do whatever you want with the node values; the calculation algorithm is not aware of working on a tree or graph.

    How about not using yield?

    Now let's ask the question – what if we did not have the yield operator? A brief look at the generated code gives us the answer. The compiler generates a class roughly 150 lines long to implement the iteration logic.

        [CompilerGenerated]
        private sealed class <GetTreeNodes>d__0 : IEnumerable<Node>, IEnumerable, IEnumerator<Node>, IEnumerator, IDisposable
        {
            ...
            150 lines of generated code
            ...
        }

    Often we compromise code readability, cleanness, testability, etc. to reduce the number of classes, lines of code, keystrokes and mouse clicks. This is human nature – we are lazy. Knowing and using such a sexy construct as yield allows us to be lazy, write very few lines of code and at the same time stay clean and do one thing per method. That's why I generally welcome using stuff like that.

    Note: the recursive depth-first traversal used above is possibly the most compact algorithm, but not the best one from a performance and memory-utilization point of view. It was chosen to emphasize the other, primary aspects of this post.
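    As a footnote to the testability point above, here is a minimal, self-contained sketch (assumed Node shape; not part of the original post) showing the payoff of the separation: the calculation can be exercised against a plain list of nodes, and the iterator can be checked on a tiny tree, independently of each other.

        // Sketch: testing calculation and iteration separately, without a full test framework.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Node
        {
            public int Value { get; set; }
            public List<Node> ChildNodes { get; } = new List<Node>();
        }

        public static class TreeMath
        {
            // Calculation knows nothing about trees - any sequence of nodes will do.
            public static int CalculateSum(IEnumerable<Node> nodes)
            {
                int sum = 0;
                foreach (Node node in nodes)
                {
                    sum += node.Value;
                }
                return sum;
            }

            // Iteration knows nothing about sums - it just yields nodes depth-first.
            public static IEnumerable<Node> GetTreeNodes(Node top)
            {
                yield return top;
                foreach (Node child in top.ChildNodes)
                {
                    foreach (Node node in GetTreeNodes(child))
                    {
                        yield return node;
                    }
                }
            }
        }

        public static class Program
        {
            public static void Main()
            {
                // Check the calculation against a flat list - no tree construction needed.
                var flat = new List<Node> { new Node { Value = 1 }, new Node { Value = 2 }, new Node { Value = 3 } };
                Console.WriteLine(TreeMath.CalculateSum(flat) == 6);           // True

                // Check the iteration separately: a root with two children yields three nodes.
                var root = new Node { Value = 10 };
                root.ChildNodes.Add(new Node { Value = 20 });
                root.ChildNodes.Add(new Node { Value = 30 });
                Console.WriteLine(TreeMath.GetTreeNodes(root).Count() == 3);   // True
            }
        }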

    Read the article

  • A Bite With No Teeth – Demystifying Non-Compete Clauses

    - by D'Arcy Lussier
    *DISCLAIMER: I am not a lawyer and this post in no way should be considered legal advice. I’m also in Canada, so references made are to Canadian court cases.

    I received a signed letter the other day, a reminder from my previous employer about some clauses associated with my employment and entry into an employee stock purchase program. So since this is in effect for the next 12 months, I guess I’m not starting that new job tomorrow. I’m kidding of course. How outrageous, how presumptuous, pompous, and arrogant that a company – any company – would actually place these conditions upon an employee. And yet, this is not uncommon. Especially in the IT industry, we see time and again similar wording in our employment agreements. But…are these legal? Are there any teeth behind the threat of the bite? Luckily, the answer seems to be ‘No’. I want to highlight two cases that support this.

    The first is Lyons v. Multari. In a nutshell, Dentist hires younger Dentist to be an associate. In their short, handwritten agreement, a non-compete clause was written stating “Protective Covenant. 3 yrs. – 5mi” (meaning you can’t set up shop within 5 miles for 3 years). Well, the young dentist left and did start an oral surgery office within 5 miles and within 3 years. Off to court they go! The initial judge sided with the older dentist, but on appeal it was overturned. Feel free to read the transcript of the decision here, but let me highlight one portion from section [19]: The general rule in most common law jurisdictions is that non-competition clauses in employment contracts are void. The sections following [19] explain further, and discuss Elsley v. J.G. Collins Insurance Agency Ltd. and its impact on Canadian law in this regard.

    The second case is Winnipeg Livestock Sales Ltd. v. Plewman. Desmond Plewman is an auctioneer, and worked at Winnipeg Livestock Sales. Part of his employment agreement was that he could not work for a competitor for 18 months if he left the company. Well, he left, and took up an important role in a competing company. The case went to court and, as with Lyons v. Multari, the initial judge found in favour of the plaintiffs. Also as in the first case, that was overturned on appeal. Again, read through the transcript of the decision, but consider section [28]: In other words, even though Plewman has a great deal of skill as an auctioneer, Winnipeg Livestock has no proprietary interest in his professional skill and experience, even if they were acquired during his time working for Winnipeg Livestock.  Thus, Winnipeg Livestock has the burden of establishing that it has a legitimate proprietary interest requiring protection.  On this key question there is little evidence before the Court.  The record discloses that part of Plewman’s job was to “mingle with the … crowd” and to telephone customers and prospective customers about future prospects for the sale of livestock.  It may seem reasonable to assume that Winnipeg Livestock has a legitimate proprietary interest in its customer connections; but there is no evidence to indicate that there is any significant degree of “customer loyalty” in the business, as opposed to customers making choices based on other considerations such as cost, availability and the like.

    So are there any instances where a non-compete can actually be valid? Yes, and these are considered “exceptional” cases, meaning that the situation meets certain criteria. 
    Michael Carabash has a great blog series discussing the above mentioned cases as well as the difference between a non-compete and a non-solicit agreement. He talks about the exceptional criteria:

    In summary, the authorities reveal that the following circumstances will generally be relevant in determining whether a case is an “exceptional” one so that a general non-competition clause will be found to be reasonable:
    - The length of service with the employer.
    - The amount of personal service to clients.
    - Whether the employee dealt with clients exclusively, or on a sustained or recurring basis.
    - Whether the knowledge about the client which the employee gained was of a confidential nature, or involved an intimate knowledge of the client’s particular needs, preferences or idiosyncrasies.
    - Whether the nature of the employee’s work meant that the employee had influence over clients in the sense that the clients relied upon the employee’s advice, or trusted the employee.
    - If competition by the employee has already occurred, whether there is evidence that clients have switched their custom to him, especially without direct solicitation.
    - The nature of the business with respect to whether personal knowledge of the clients’ confidential matters is required.
    - The nature of the business with respect to the strength of customer loyalty, how clients are “won” and kept, and whether the clientele is a recurring one.
    - The community involved and whether there were clientele yet to be exploited by anyone.

    I close this blog post with a final quote, one from Zvulony & Co’s blog post on this subject. Again, all of this is not official legal advice, but I think we can see what all these sources are pointing towards. To answer my earlier question, there’s no teeth behind the threat of the bite.

    In light of this list, and the decisions in Lyons and Orlan, it is reasonably certain that in most employment situations a non-competition clause will be ineffective in protecting an employer from a departing employee who wishes to compete in the same business. The Courts have been relatively consistent in their position that if a non-solicitation clause can protect an employer’s interests, then a non-competition clause is probably unreasonable. Employers (or their solicitors) should avoid the inclination to draft restrictive covenants in broad, catch-all language.

    Or in other words, when drafting a restrictive covenant – take only what you need! D

    Read the article
