Search Results

Search found 5998 results on 240 pages for 'rise against'.

  • Windows Azure Mobile Services Updates Keep Coming

    - by Clint Edmonson
    Some exciting new Windows Azure Mobile Services features were delivered to production this week. The highlights include:

    - iPhone and iPad connectivity support via a new iOS SDK
    - Integrated authentication so developers can configure user authentication via Microsoft Account, Facebook, Twitter, and Google
    - New server-side Mobile Service script modules
    - Access to structured storage: Windows Azure Blob, Table, Queues, and Service Bus
    - Email services through a partnership with SendGrid
    - SMS and voice services through a partnership with Twilio
    - Mobile Services hosting expanded to the west coast of the US

    The iOS SDK

    I'm excited to share that we've announced the release of an under-development iOS client SDK for Windows Azure Mobile Services. The iOS SDK joins the Windows 8 SDK launched with Windows Azure Mobile Services, as well as the client SDKs released by Xamarin for MonoTouch and MonoDroid. The native iOS SDK is for developers programming in Objective-C on the iPhone and iPad platforms. It gives developers the same level of access to data storage using dynamic schematization that is available for Windows 8, and iOS applications can use the same authentication options available in Mobile Services. While full iOS support is still in development, the libraries are currently available on GitHub. There's a great getting-started tutorial that walks you through building a simple iOS "Todo List" app that stores data in Windows Azure. These additional tutorials explore how to use the iOS client libraries to store data and authenticate users:

    - Get started with data in Mobile Services for iOS
    - Get started with authentication in Mobile Services for iOS

    What's New in Authentication

    Available to both iOS and Windows 8 developers, Mobile Services has expanded its authentication options. Developers can now use Microsoft, Facebook, Twitter, and Google authentication. As with Microsoft accounts, developers must register through Facebook, Twitter, or Google's developer portal in order to authenticate through those providers. These tutorials walk through how to register your Mobile Service with an identity provider:

    - How to register your app with Microsoft Account
    - How to register your app with Facebook
    - How to register your app with Twitter
    - How to register your app with Google

    And these tutorials walk through authenticating against Mobile Services:

    - Get started with authentication in Mobile Services for Windows Store (C#)
    - Get started with authentication in Mobile Services for Windows Store (JavaScript)
    - Get started with authentication in Mobile Services for iOS

    What's New in Mobile Service Scripts

    Some great new functionality is now available in the Mobile Service script layer. These server-side scripts are triggered by any CRUD operation on a Mobile Service table and can already handle data and query validation, filtering, web requests, and more. The Azure SDK module is now available to these scripts, giving them access to Blob storage, Service Bus, and Table storage. Check out the new tutorials on the Windows Azure Node.js developer center to learn more about working with Blob, Tables, Queues, and Service Bus using the azure module. In addition, SendGrid and Twilio are now available as modules that can be called from the scripts, giving developers the ability to send emails (SendGrid) or SMS text messages (Twilio) whenever a script fires (see the sketch below). Windows Azure customers receive a special offer of 25,000 free emails per month from SendGrid and 1,000 free text messages from Twilio.

    Expanded Data Center Availability

    In addition to Mobile Services being available in our US East data center, they can now be spun up in US West.

    The above features are all live in production and available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using Mobile Services today. The Windows Azure Mobile Developer Center has been updated with new tutorials that cover these features in detail. And don't forget: Windows Azure Mobile Services are still free for your first ten applications running on shared compute instances. Stay tuned to my Twitter feed for Windows Azure announcements, updates, and links: @clinted
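    As a concrete illustration of the script layer described above, here is a minimal sketch of a server-side insert script that sends a notification email through the SendGrid module after a record is written. This is not from the original post: the table, addresses, and credentials are hypothetical placeholders, and the module usage follows the pattern shown in the SendGrid tutorials of the time, so check the Mobile Services documentation for the exact signatures.

        // Server-side script attached to a table's insert operation.
        function insert(item, user, request) {
            request.execute({
                success: function () {
                    // Respond to the client first, then send the email.
                    request.respond();
                    sendNotification(item);
                }
            });
        }

        function sendNotification(item) {
            var SendGrid = require('sendgrid').SendGrid;
            // Hypothetical credentials - substitute your own SendGrid account values.
            var sendgrid = new SendGrid('YOUR_SENDGRID_USER', 'YOUR_SENDGRID_KEY');
            sendgrid.send({
                to: 'someone@example.com',
                from: 'service@example.com',
                subject: 'New item added',
                text: 'A new item was inserted: ' + item.text
            }, function (success, message) {
                if (!success) {
                    console.error(message);
                }
            });
        }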

    Read the article

  • Get to Know a Candidate (16 of 25): Stewart Alexander – Socialist Party USA

    - by Brian Lanham
    DISCLAIMER: This is not a post about "Romney" or "Obama", nor is it a post about whom I am voting for. Information is sourced from Wikipedia.

    Alexander is an American democratic socialist politician and a resident of California. He was the Peace and Freedom Party candidate for Lieutenant Governor in 2006, receiving 43,319 votes, 0.5% of the total. In August 2010, Alexander declared his candidacy for President of the United States with the Socialist Party and Green Party. In January 2011, he also declared his candidacy for the presidential nomination of the Peace and Freedom Party.

    Stewart Alexis Alexander was born to Stewart Alexander, a brick mason and minister, and Ann E. McClenney, a nurse and housewife. While in the Air Force Reserve, Alexander worked as a full-time retail clerk at Safeway Stores and then began attending college at California State University, Dominguez Hills, working overtime as a stocking clerk with Safeway to support himself through school. During this period he married Freda Alexander, his first wife; they had one son. He was honorably discharged in October 1976 and married for the second time. He left Safeway in 1978 and for a brief period worked as a licensed general contractor. In 1980, he went to work for Lockheed Aircraft but quit the following year. Returning to Los Angeles, he became involved in several civic organizations, most notably the NAACP, where he became the Labor and Industry Chairman for the Inglewood South Bay Branch. In 1986 he moved back to Los Angeles and hosted a weekly talk show on KTYM Radio until 1989. The show dealt with social issues affecting Los Angeles such as gangs, drugs, and redevelopment, interviewing government officials from all levels of government and community leaders throughout California. He also worked with Delores Daniels of the NAACP on the radio and in the street.

    The Socialist Party USA (SPUSA) is a multi-tendency democratic-socialist party in the United States. The party states that it is the rightful continuation of and successor to the tradition of the Socialist Party of America, which lasted from 1901 to 1972, and it is officially committed to left-wing democratic socialism. The Socialist Party USA, along with its predecessors, has received varying degrees of support when its candidates have competed against those from the Republican and Democratic parties. Some attribute this to the party having to compete with the financial dominance of the two major parties, as well as the limitations of the United States' legislatively and judicially entrenched two-party system. The Party supports third-party candidates, particularly socialists, and opposes the candidates of the two major parties. Opposing both capitalism and "authoritarian Communism", the Party advocates bringing big business under public ownership and democratic workers' self-management, and it opposes the unaccountable bureaucratic control characteristic of Soviet communism.

    Alexander has ballot access in CO, FL, NY, and OH (write-in access in IN and TX). Learn more about Stewart Alexander and the Socialist Party USA on Wikipedia.

    Read the article

  • Tuning Red Gate: #3 of Lots

    - by Grant Fritchey
    I'm drilling down into the metrics about SQL Server itself available to me in the Analysis tab of SQL Monitor to see what's up with our two problematic servers. In the previous post I'd noticed that rg-sql01 had quite a few CPU spikes, so one of the first things I want to check there is how much CPU is being used by SQL Server itself. It's possible we're looking at some other process using up all the CPU. Nope, it's SQL Server. I compared this to the rg-sql02 server: you can see that there is a more consistently low set of CPU counters there. I clearly need to look at rg-sql01 and capture more specific data around the queries running on it to identify which ones are causing these CPU spikes.

    I always like to look at the Batch Requests/sec on a server, not because it's an indication of a problem, but because it gives you some idea of the load. Just how much is this server getting hit? Here are rg-sql01 and rg-sql02: of the two, clearly rg-sql01 has a lot of activity. Remember, though, that's all this is: a measure of activity. It doesn't suggest anything other than what it says, the number of requests coming in. But it's the kind of thing you want to know in order to understand how the system is used. Are you seeing a correlation between the number of requests and the CPU usage, or a reverse correlation, where the number of requests drops as the CPU spikes? See, it's useful.

    Some of the details you can look at are Compilations/sec, Compilations/Batch and Recompilations/sec. These give you some idea of how the cache is being used within the system. None of these showed anything interesting on either server.

    One metric that I like (even though I know it can be controversial) is the Page Life Expectancy. On the average server I expect to see a series of mountains as the PLE climbs and then drops due to a data load or something along those lines. That's not the case here: those spikes back in January suggest that the servers weren't really being used much. The PLE on rg-sql01 seems to be somewhat consistent, growing to 3 hours or so and then dropping, but the rg-sql02 PLE looks like it might be all over the map.

    Instead of continuing to look at this high-level data-gathering view, I'm going to drill down on rg-sql02 and see what it's done for the last week. And now we begin to see where we might have an issue: memory on this system is getting flushed every half hour or so. I'm going to check another metric, scans. Whoa! I'm going back to the system real quick to look at some disk information again for rg-sql02: the average disk queue length on the server, and the transfers.

    Right, I think I have a guess as to what's up here. We're seeing memory get flushed constantly and we're seeing lots of scans. The disks are queuing, especially that F drive, and there are lots of requests that correspond to the scans and the memory flushes. In short, we've got queries that are scanning the data, a lot, so we either have bad queries or bad indexes. I'm going back to the server overview for rg-sql02 to check the Top 10 expensive queries, modifying the view to show me the last 3 days and the totals, so I'm not looking at some maintenance routine that ran 10 minutes ago and is skewing the results. OK. I need to look into these queries that are being executed this much. They're generating a lot of reads, but which queries are generating the most reads? Ow, all still going against the same database. This is where I'm going to temporarily leave SQL Monitor. What I want to do is connect up to the server, validate that the Warehouse database is using the F:\ drive (which I'll put money down it is) and then start seeing what's up with these queries.

    Part 1 of the Series | Part 2 of the Series

    Read the article

  • Windows Azure Use Case: New Development

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Computing platforms evolve over time. Originally computers were directed by hardware wiring - that is, the "code" was the path of the wiring that directed an electrical signal from one component to another, or in some cases a physical switch controlled the path. From there software was developed, first in a very low-level machine language; then, when compilers were created, computer languages could more closely mimic written statements. These language statements can be compiled into the lower-level machine language still used by computers today. Microprocessors replaced logic circuits, sometimes with fewer instructions (Reduced Instruction Set Computing, RISC) and sometimes with more instructions (Complex Instruction Set Computing, CISC). The reason this history is important is that with each technology advancement, computer code has adapted. Writing software for a RISC architecture is significantly different than developing for a CISC architecture. And moving to a Distributed Architecture like Windows Azure also has specific implementation details that our code must follow.

    But why make a change? As I've described, we need to change our code to follow advances in technology. There's no point in change for its own sake, but when a new paradigm offers benefits to our users, it's important for us to leverage those benefits where it makes sense. That's most often done in new development projects. It's a far simpler task to take a new project and adapt it to Windows Azure than to try to retrofit older code designed in a previous computing environment. We can still use the same coding languages (.NET, Java, C++) to write code for Windows Azure, but we need to think about the architecture of that code on a new project so that it runs in the most efficient, cost-effective way in a Distributed Architecture. As we receive new requests from the organization for new projects, a distributed architecture paradigm belongs in the decision matrix for the platform target.

    Implementation: When you are designing new applications for Windows Azure (or any distributed architecture) there are many important details to consider. But at the risk of over-simplification, there are three main concepts to learn and architect within the new code:

    Stateless Programming - Stateless programming is a prime concept within distributed architectures. Rather than each server owning the complete processing cycle, the information from an operation that needs to be retained (the "state") should be persisted to another location (like storage) common to all machines involved in the process (see the sketch below). An interesting learning path for Stateless Programming (although not unique to this language type) is to learn Functional Programming.

    Server-Side Processing - Along with developing using a Stateless Design, the closer you can locate the code processing to the data, the less expensive and faster the code will run. When you control the network layer, this is less important, since you can send vast amounts of data between the server and client, allowing the client to perform processing. In a distributed architecture, you don't always own the network, so its performance is unpredictable. Also, you may not be able to control the platform the user is on (such as a smartphone, PC or tablet), so it's imperative to deliver only results and graphical elements where possible.

    Token-Based Authentication - Also called "Claims-Based Authorization", this code practice means that instead of allowing a user to log on once and then running code in that context, a more granular level of security is used. A "token" or "claim", often represented as a certificate, is sent along for a series of requests or even a single request. In other words, every call to the code is authenticated against the token, rather than allowing a user free rein within the code call. While this is more work initially, it can bring a greater level of security, and it is far more resilient to disconnections.

    Resources:
    - See the references for "Nondistributed Deployment" and "Distributed Deployment" at the top of this article for more information with graphics: http://msdn.microsoft.com/en-us/library/ee658120.aspx
    - Stack Overflow has a good thread on functional programming: http://stackoverflow.com/questions/844536/advantages-of-stateless-programming
    - Another good discussion on Stack Overflow on server-side processing is here: http://stackoverflow.com/questions/3064018/client-side-or-server-side-processing
    - Claims-Based Authorization is described here: http://msdn.microsoft.com/en-us/magazine/ee335707.aspx
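    To make the stateless programming concept concrete, here is a minimal TypeScript sketch (not from the original post) in which each request rehydrates its state from a shared store and persists it again before returning, so any machine in the pool can service the next call. The StateStore interface and the in-memory stand-in are hypothetical; in Windows Azure the concrete implementation would typically sit on top of a store common to all servers, such as Table storage.

        // Any machine can load and save session state; nothing survives in
        // process memory between requests beyond what the shared store holds.
        interface StateStore {
          load(sessionId: string): Promise<Record<string, unknown>>;
          save(sessionId: string, state: Record<string, unknown>): Promise<void>;
        }

        // In-memory stand-in used only for this sketch; a real deployment
        // would back this with storage shared by all machines.
        class DemoStore implements StateStore {
          private data = new Map<string, Record<string, unknown>>();
          async load(sessionId: string) { return this.data.get(sessionId) ?? {}; }
          async save(sessionId: string, state: Record<string, unknown>) {
            this.data.set(sessionId, state);
          }
        }

        // A stateless handler: state comes in from storage and goes back out,
        // so the next request can be served by any other machine in the pool.
        async function handleRequest(store: StateStore, sessionId: string, step: string) {
          const state = await store.load(sessionId); // rehydrate
          state.lastCompletedStep = step;            // do this call's work
          await store.save(sessionId, state);        // persist before returning
        }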

    Read the article

  • Iron Speed Designer 7.0 - the great gets greater!

    - by GGBlogger
    For Immediate Release
    Iron Speed, Inc. | Kelly Fisher | +1 (408) 228-3436 | [email protected] | http://www.ironspeed.com

    Iron Speed Version 7.0 Generates SharePoint Applications
    New! Support for Microsoft SharePoint speeds application generation and deployment

    San Jose, CA - June 8, 2010. Software development tools maker Iron Speed, Inc. released Iron Speed Designer Version 7.0, the latest version of its popular Web 2.0 application generator. Iron Speed Designer generates rich, interactive database and reporting applications for .NET, Microsoft SharePoint and the Cloud. In addition to .NET applications, Iron Speed Designer V7.0 generates database-driven SharePoint applications. The ability to quickly create database-driven applications for SharePoint eliminates a lot of work, helping IT departments generate productivity-enhancing applications in just a few hours. Generated applications include integrated SharePoint application security and use SharePoint master pages.

    "It's virtually impossible to build database-driven applications in SharePoint by hand. Iron Speed Designer V7.0 not only makes this possible, the tool makes it easy." - Razi Mohiuddin, President, Iron Speed, Inc.

    Integrated SharePoint application security

    Generated applications include integrated SharePoint application security. SharePoint sites and their groups are used to retrieve security roles. Iron Speed Designer validates the user against a Microsoft SharePoint server on your network by retrieving the logged-in user's credentials from the SharePoint context.

    "The Iron Speed Designer generated application integrates seamlessly with SharePoint security, removing the hassle of designing, testing and approving your own security layer." - Michael Landi, Solutions Architect, Light Speed Solutions

    SharePoint Solution Packages

    Iron Speed Designer V7.0 creates SharePoint Solution Packages (WSPs) for easy application deployment. Using the Deployment Wizard, a single application WSP is created and can be deployed to your SharePoint server.

    "Iron Speed Designer is the first product on the market that allows easy and painless deployment of database-driven .NET web applications inside the SharePoint environment." - Bryan Patrick, Developer, Pseudo Consulting

    SharePoint master pages and themes

    In V7.0, generated applications use SharePoint master pages and contain the same content as other SharePoint pages. Generated applications use the current SharePoint color scheme and display standard SharePoint navigation controls on each page.

    "Iron Speed Designer preserves the look and feel of the SharePoint environment in deployed database applications without additional hand-coding." - Kirill Dmitriev, Software Developer, Iron Speed, Inc.

    Iron Speed Designer Version 7.0 System Requirements

    Iron Speed Designer Version 7.0 runs on Microsoft Windows 7, Windows Vista, Windows XP, and Windows Server 2003 and 2008. It generates .NET Web applications for Microsoft SQL Server, Oracle, Microsoft Access and MySQL. These applications may be deployed on any machine running the .NET Framework. Iron Speed Designer supports Microsoft SharePoint 2007 and Windows SharePoint Services (WSS3). Find complete information about Iron Speed Designer Version 7.0 at www.ironspeed.com.

    About Iron Speed, Inc.

    Iron Speed is the leader in enterprise-class application generation. Our software development tools generate database and reporting applications in significantly less time and cost than hand-coding. Our flagship product, Iron Speed Designer, is the fastest way to deliver applications for the Microsoft .NET and software-as-a-service cloud computing environments. With products built on decades of experience in enterprise application development and large-scale e-commerce systems, Iron Speed products eliminate the need for developers to choose between "full featured" and "on schedule." Founded in 1999, Iron Speed is well funded with a capital base of over $20M and strategic investors that include Arrow Electronics and Avnet, as well as executives from AMD, Excelan, Onsale, and Oracle. The company is based in San Jose, Calif., and is located online at www.ironspeed.com.

    Read the article

  • Patching and PCI Compliance

    - by Joel Weise
    One of my friends and master of the security universe, Darren Moffat, pointed me to Dan Anderson's blog the other day. Dan went to Toorcon, a security conference, where he attended a talk on security patching titled "Stop Patching, for Stronger PCI Compliance". I realize that speakers often use a headline-grabbing title to create interest in their talk, and this one certainly got my attention. I did not go to the conference and did not see the presentation, so I can only go by what is in the Toorcon agenda summary and on Dan's blog, but the general statement to stop patching for stronger PCI compliance seems a bit misleading to me. Clearly patching is important to all systems management and should be a part of any organization's security hygiene. Further, PCI does require the patching of systems to maintain compliance. So it's important to mention that organizations should not simply stop patching their systems; and I want to believe that was not the speaker's intent.

    So let's look at PCI requirement 6: "Unscrupulous individuals use security vulnerabilities to gain privileged access to systems. Many of these vulnerabilities are fixed by vendor-provided security patches, which must be installed by the entities that manage the systems. All critical systems must have the most recently released, appropriate software patches to protect against exploitation and compromise of cardholder data by malicious individuals and malicious software."

    Notice the word "appropriate" in the requirement. This is stated to give organizations some latitude to apply patches that make sense in their environment and that target the vulnerabilities in question. Haven't we all seen a vulnerability scanner throw a false positive, flag some module and point to a recommended patch, only to realize that the module doesn't exist on our system? Applying such a patch would obviously not be appropriate. This does not mean an organization can ignore the fact that they need to apply security patches; it's pretty clear they must. Of course, organizations have other options in terms of compliance when it comes to patching. For example, they could remove a system from scope and make sure that system does not process or contain cardholder data. [This may or may not be a significant undertaking. I just wanted to point out that there are always options available.]

    PCI DSS requirement 6.1 also includes the following note: "Note: An organization may consider applying a risk-based approach to prioritize their patch installations. For example, by prioritizing critical infrastructure (for example, public-facing devices and systems, databases) higher than less-critical internal devices, to ensure high-priority systems and devices are addressed within one month, and addressing less critical devices and systems within three months."

    Notice there is no mention of stopping patching one's systems. And the note also states an organization may apply a risk-based approach - a smart approach, but also not mandated. Such a risk-based approach is not intended to remove the requirement to patch one's systems; it is meant, as stated, to allow one to prioritize patch installations.

    So what does this mean to an organization that must comply with PCI DSS and maintain some sanity around their patch management and overall operational readiness? I for one like to think that most organizations take a common-sense and balanced approach to their business and security posture. If patching is becoming an unbearable task, review why that is the case and possibly look for ways to improve operational efficiencies; but also recognize that security is important to maintaining the availability and integrity of one's systems. Likewise, whether we like it or not, the cyber-world we live in is getting more complex and threatening - and I don't think it's going to get better any time soon.

    Read the article

  • Brainless Backups

    - by Jesse
    I'm a software developer by trade, which means to my friends and family I'm just a "computer guy". It's assumed that I know everything about every facet of computing, from removing spyware to replacing hardware. I can also do all of this blindly over the phone or after hearing a five- to ten-word description of the problem over dinner ;-) In my position as CIO of my friends and family I've been in the unfortunate position of trying to recover music, pictures, or documents off of failed hard drives on more than one occasion. It's not a great situation for anyone, and it's always at these times that the importance of backups becomes so clear.

    Several months back a friend of mine found himself in this situation. The hard drive on his 8-year-old laptop failed and took a good number of his digital photos with it. I think most folks can deal with losing some of their music and even some of their documents, but it really stings to lose pictures of past events and loved ones. After ordering a new laptop, my friend went out and bought an external hard drive so that he could start keeping a backup of his data. As fate would have it, several months later the drive in his new laptop failed and he learned the hard way that simply buying the external hard drive isn't enough... you actually have to copy your stuff over every once in a while!

    The importance of backup and recovery plans is (hopefully) well known in IT organizations. Well-executed backup plans are in place, and hopefully the backup and recovery process is tested regularly. When you're talking about users at home, however, the need for these backups is often understood far too late. Most typical users can't be expected to remember to back up their data regularly, and they don't always have the know-how to set up automated backups. For my friends and family members in this situation I recommend tools like Dropbox, Carbonite, and Mozy. Here's why I like them:

    - They're affordable: Dropbox and Mozy both have free offerings, though most people with lots of music and/or photos to back up will probably exceed the storage limitations of those free plans pretty quickly. Still, all three offer pretty affordable monthly or yearly plans. In my opinion, Carbonite's unlimited storage plan for $50-$60 per year is the best value around.
    - They're easy to set up: Both Dropbox and Carbonite are very easy to get set up and start using. I've never used Mozy, but I imagine it's similarly painless to get up and running.
    - Backups are automatically "off-site": A backup that is sitting on an external hard drive right next to your computer is great, but might not protect against flood damage, a power surge, or other disasters in that single location. These services exist "in the cloud", so to speak, helping mitigate those concerns. Granted, this kind of backup scheme requires some trust in the third party to protect your data from both malicious people and disastrous events. This truly is a bit of a double-edged sword, but I sleep well at night knowing that my data is being backed up and secured by a company made up of engineers who focus on the business of doing backups right.
    - Backups are "brainless": What I like most about services like these is that they work "automagically" in the background, watching for files to be updated and automatically backing up those changes. There's no need to remember to plug in that external drive and copy your data over.

    Since starting to recommend these services to my friends and family I find myself wearing my "data recovery" hat far less often. The only way backups are effective for your standard computer user is if they're completely automatic. Backups need to be brainless, or they just won't work.

    Read the article

  • Measuring Code Quality

    - by DotNetBlues
    Several months back, I was tasked with measuring the quality of code in my organization. Foolishly, I said, "No problem." I figured that Visual Studio has a built-in code metrics tool (Analyze -> Calculate Code Metrics) and that would be a fine place to start. I was right, but also very wrong.

    Visual Studio calculates five primary metrics: Maintainability Index, Cyclomatic Complexity, Depth of Inheritance, Class Coupling, and Lines of Code. The first two are figured at the method level, the next two at (primarily) the class level, and the last is a simple count. The first question any reasonable person should ask is "Which one do I look at first?" The first question any manager is going to ask is, "What one number tells me about the whole application?" My answer to both, in a way, was "Maintainability Index." Why? Because each of the other numbers represents one element of quality, while MI is a composite number that includes Cyclomatic Complexity. I'd be lying if I said no consideration was given to the fact that it's abstract enough that it's harder for some surly developer (I've been known to resemble that remark) to start arguing why high coupling or inheritance is no big deal or how complex requirements are to blame for complex code.

    I should also note that I don't think there is one magic-bullet metric that will tell you objectively how good a code base is. There are a ton of different metrics out there, and each one was created with a specific purpose in mind and has a pet theory behind it. When you've got a group of developers who aren't accustomed to measuring code quality, picking a 0-100 scale, non-controversial metric that can be easily generated by tools you already own really isn't a bad place to start.

    That sort of answers the question a developer would ask, but what about the management question: how do you dashboard this stuff when Visual Studio doesn't roll up the numbers to the solution level? Since VS does roll up the MI to the project level, I thought I could just figure out what sort of weighting Microsoft used to roll method scores up to the class level and then to the namespace and project levels. I was a bit surprised by the answer: there is no weighting. That means that a class with one 1300-line method (which will score a 0 MI) and one empty constructor (which will score a 100 MI) will have an overall MI of a respectable 50. Throw in a couple of DTOs that are nothing more than getters and setters (which tend to score 95 or better) and the project ends up looking really, really healthy. The next poor bastard who has to work on the application is probably not going to be singing the praises of its maintainability, though. For the record, that 1300-line method isn't hypothetical, either.

    So, what does one do with that? Well, I decided to weight the average by the Lines of Code per method. For our example above, the formula for the class's MI becomes ((1300 * 0) + (1 * 100)) / 1301 = 0.077, rounded to 0. Sounds about right. Continue the pattern for namespace, project, solution, and even multi-solution application MI scores. This can be done relatively easily by using the "export to Excel" button and running a quick formula against the data; a sketch of the same roll-up in code follows below. On the short list of follow-up questions would be, "How do I improve my application's score?" That's an answer for another time, though.
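    For readers who prefer code to spreadsheet formulas, here is a minimal TypeScript sketch of the LOC-weighted roll-up described above. It is not from the original post, and the structure and names are hypothetical:

        interface MethodMetrics {
          maintainabilityIndex: number; // 0-100, as reported by Visual Studio
          linesOfCode: number;
        }

        // Roll method-level MI scores up to a class-level score, weighting each
        // method by its lines of code so one huge, unmaintainable method can't
        // hide behind a pile of trivial getters and setters.
        function weightedMaintainabilityIndex(methods: MethodMetrics[]): number {
          const totalLoc = methods.reduce((sum, m) => sum + m.linesOfCode, 0);
          if (totalLoc === 0) return 100; // no code, nothing to maintain
          const weightedSum = methods.reduce(
            (sum, m) => sum + m.maintainabilityIndex * m.linesOfCode, 0);
          return Math.round(weightedSum / totalLoc);
        }

        // The example from the post: a 1300-line method scoring 0 plus a
        // one-line constructor scoring 100 yields 0, not a misleading 50.
        console.log(weightedMaintainabilityIndex([
          { maintainabilityIndex: 0, linesOfCode: 1300 },
          { maintainabilityIndex: 100, linesOfCode: 1 },
        ]));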

    Read the article

  • Finding the maximum value/date across columns

    - by AtulThakor
    While working on some code recently I discovered a neat little trick to find the maximum value across several columns. The starting point was finding the maximum date across several related tables and storing the maximum value against an aggregated record. Here's the sample setup code:

        USE TEMPDB

        IF OBJECT_ID('CUSTOMER') IS NOT NULL
        BEGIN
            DROP TABLE CUSTOMER
        END

        IF OBJECT_ID('ADDRESS') IS NOT NULL
        BEGIN
            DROP TABLE ADDRESS
        END

        IF OBJECT_ID('ORDERS') IS NOT NULL
        BEGIN
            DROP TABLE ORDERS
        END

        SELECT 1 AS CUSTOMERID, 'FREDDY KRUEGER' AS NAME, GETDATE() - 10 AS DATEUPDATED
        INTO CUSTOMER

        SELECT 100000 AS ADDRESSID, 1 AS CUSTOMERID, '1428 ELM STREET' AS ADDRESS, GETDATE() - 5 AS DATEUPDATED
        INTO ADDRESS

        SELECT 123456 AS ORDERID, 1 AS CUSTOMERID, GETDATE() + 1 AS DATEUPDATED
        INTO ORDERS

    Now, the original code used a function to determine the maximum date, and this performed poorly. After considering pivoting the data I opted for a CASE statement. This seemed reasonable until I discovered other areas which needed to determine the maximum date between five or more tables, which didn't scale well. The final solution involved using the VALUES clause within a subquery, as follows:

        SELECT C.CUSTOMERID,
               A.ADDRESSID,
               (SELECT MAX(DT)
                  FROM (VALUES (C.DATEUPDATED),
                               (A.DATEUPDATED),
                               (O.DATEUPDATED)) AS VALUE(DT))
        FROM CUSTOMER C
        INNER JOIN ADDRESS A ON C.CUSTOMERID = A.CUSTOMERID
        INNER JOIN ORDERS O ON O.CUSTOMERID = C.CUSTOMERID

    As you can see, the solution scales well and can take advantage of many of the aggregate functions!

    Read the article

  • AppKata - Enter the next level of programming exercises

    - by Ralf Westphal
    Doing CodeKatas is all the rage lately. That's great, since widely accepted exercises are important to further the art. They provide a means of communication across platforms and allow comparison of results, which is part of any deliberate practice. But CodeKatas suffer from their size. They are intentionally small, so they can be done again and again. Repetition helps to build habit and to dig deeper. Over time, ever new nuances of the problem or one's approach become visible. On the other hand, though, their small size limits the methods, techniques, and technologies that can be applied. To improve your TDD skills, doing CodeKatas might be enough. But what about other skills? Developing software in a team, designing larger pieces of software, iteratively releasing software... all this and more is kinda hard to train using the tiny CodeKata problems.

    That's why I'd like to present here another kind of kata I call Application Kata (or just AppKata). AppKatas are larger programming problems. They require the development of "whole" applications, i.e. not just one class or method, but bunches of classes accessible through a user interface. Also, AppKata problems are always split into iterations. To get the most out of them, just look at the requirements of one iteration at a time. This way you're closer to reality, where requirements evolve in unexpected ways.

    So if you're looking for more of a challenge for your software development skills, check out these AppKatas - or invent your own. AppKatas are platform independent like CodeKatas. Use whatever programming language and IDE you like. Also use whatever approach to software development you like. Just be sensitive to how easy it is to evolve your code across iterations. Reflect on what went well and what not. Compare your solutions with others. Or - for even more challenge - go for the "Coding Carousel" (see below).

    CSV Viewer: An application to view CSV files. Sounds easy, but watch out! Requirements sometimes drastically change if the customer is happy with what you delivered. (Iterations 1-4 published; iteration 5 to come)

    Questionnaire: If you like GUI programming, this AppKata might be for you. It's about an app to let people fill out questionnaires. This problem might also be interesting for you if you're into DDD. (Iteration 1 published; iterations 2-4 to come)

    Tic Tac Toe: For developers who like game programming. Although Tic Tac Toe is a trivial game, this AppKata poses some interesting infrastructure challenges. The GUI, however, stays simple; leave any 3D ambitions at home ;-) (Iteration 1 published; iterations 2-5 to come)

    Coding Carousel: There are many ways you can do AppKatas. Work on them alone or in a team, pitch several devs against each other in an AppKata contest - or go around in a Coding Carousel. For the Coding Carousel you need at least 3 dev teams (regardless of size). All teams work on the same iteration at the same time. But here's the trick: after each iteration the teams swap their code. Whatever they did for iteration n will be the basis for changes another team has to apply in iteration n+1. The code goes around the teams like in a carousel. I promise you, that's gonna be fun! :-)

    Read the article

  • Oracle Employees Support New World Record for IYF Children's Hour

    - by Maria Sandu
    960 students 'crouched', 'touched' and 'set' under the watchful eye of international rugby referee Alain Roland, and supported by Oracle employees, to successfully set a new world record for the World's Largest Scrum to raise funds and awareness for the Irish Youth Foundation.

    Last year Oracle employees supported the Irish Youth Foundation by donating funds from their payroll through the Giving Tree Appeal. We were the largest corporate donor to the IYF, raising €3,075. To acknowledge our generosity, the IYF asked Oracle Leadership in Society team members to participate in their most recent campaign, which was to break the Guinness World Record by forming the World's Largest Rugby Scrum. This was a wonderful opportunity for Oracle's Leadership in Society to promote the charity, support education and make a mark in the corporate social responsibility field. The students who formed the scrum also gave up their lunch money and raised a total of €3,000. This year we hope Oracle employees will once again support the IYF with the challenge to match that amount.

    On the 24th of October the sun shone down on the streaming lines of students entering the field. 480 students were decked out in bright red Oracle T-shirts, against the other 480 in blue and white jerseys - all ready to form a striking scrum. Ryan Tubridy, the host of the event, made the opening announcement, and with the blow of a whistle the scrum began. 960 students locked tight together, with the Leinster players also at each side. Leinster manager Matt O'Connor was there, along with presenters Ryan Tubridy and George Hook, to assist with getting the boys in line and keeping the shape of the scrum. In accordance with Guinness World Record rules, the ball was fed into the scrum properly by Ireland and Leinster scrum-half Eoin Reddan, and was then passed out the line to his Leinster teammates, including Ian Madigan, Brendan Macken and Jordi Murphy, also proudly sporting the Oracle T-shirt. The new world record was made, everyone gave a big cheer, and thankfully nobody got injured!

    Thank you to everyone in Oracle who donated last year through the Giving Tree Appeal. Your generosity has gone a long way to support local youth groups. Last year's donation was so substantial that the IYF was able to spread it across two youth groups:

    - Ballybough Youth Project in Dublin: the funding gave 24 young people from the project the chance to get away from the inner city and the problems and issues they face in their daily lives by taking a trip to the Cavan Centre to spend a weekend away in a safe and comfortable environment; a very rare holiday in these young people's lives.
    - The Rahoon Family Centre: they used the money to help secure the long-term sustainability of their project. They act as an educational/social/fun project that has been working with disadvantaged children for the past 16 years. Their aim is to change young people's futures with fun and social education, supporting them so they can maximize their creativity and potential.

    We hope you can help support this worthy cause again this year, so keep an eye out for the Children's Hour and Giving Tree Appeal!

    About the Irish Youth Foundation: The IYF provides opportunities for marginalised children and young people facing difficult and extreme conditions to experience success in their lives. It passionately believes that achievement starts with opportunity. The IYF's strategy is based on providing safe places where children can go after school to grow, to learn and to play, and on providing opportunities for teenagers from under-served communities to succeed and excel in their lives. The IYF supports innovative grassroots projects operated by dedicated professionals who understand young people and care about them. This allows the IYF to focus on supporting young people at risk of dropping out of school, in particular on the critical transition from primary to secondary school, and on empowering teenagers from disadvantaged neighborhoods to become engaged in their local communities. Find out more at www.iyf.ie

    Read the article

  • ArchBeat Top 10 for December 2-8, 2012

    - by Bob Rhubart
    The Top 10 most-clicked items shared on the OTN ArchBeat Facebook page for the week of December 2-8, 2012:

    - Configure Oracle SOA JMSAdapter to Work with WLS JMS Topics: Another of the four posts published on Dec 4 by the Fusion Middleware A-Team blogger identified as "fip" illustrates "how to configure the JMS Topic, the JmsAdapter connection factory, as well as the composite so that the JMS Topic messages will be evenly distributed to same composite running off different SOA cluster nodes without causing duplication."
    - Web Service Example - Part 3: Asynchronous: Part 3 in this series from the Oracle ADF Mobile blog looks at "firing the web service asynchronously and then filling in the UI when it completes." Denis says, "This can be useful when you have data on the device in a local store and want to show that to the user while the application uses lazy loading from a web service to load more data."
    - Advanced Oracle SOA Suite Oracle Open World 2012 SOA Presentations: Oracle SOA & BPM Partner Community blogger Juergen Kress shares a list of 13 SOA presentations delivered or moderated by Oracle SOA Product Management at OOW12 in San Francisco.
    - Oracle WebLogic Server WLS Domain Browser: My colleague Jeff Davies, a frequent speaker at OTN Architect Day events and a genuinely nice guy, emailed me last night with this message: "I just came across this app on Google Play. It allows WebLogic administrators to browse WLS 12c domain information. I installed it on my phone and tried it out. Works very fast." I'm an iPhone guy, but I'm perfectly comfortable taking Jeff at his word. The app is called WLS Domain Browser. Follow the link for more info from the Google Play site.
    - Retrieve Performance Data from SOA Infrastructure Database: Another of the four blog posts published on Dec 4 by the very busy Oracle Fusion Middleware A-Team member "fip", this one offers "examples of some basic SQL queries you can run against the infrastructure database of Oracle SOA Suite 11G to acquire the performance statistics for a given period of time."
    - How to Achieve OC4J RMI Load Balancing: "Having returned from a customer who faced challenges with OC4J RMI load balancing, I felt there is still some confusion in the field [about] how OC4J RMI load balancing works," says the Oracle Fusion Middleware A-Team member known only as "fip." "Hence I decided to dust off an old tech note that I wrote a few years back and share it with the general public."
    - From XaaS to Java EE - Which damn cloud is right for me in 2012?: Oracle ACE Director Markus Eisele wrestles with a timely technical issue and shares his observations on several of the alternatives.
    - Exalogic 2.0.1 Tea Break Snippets - Creating a ModifyJeOS VirtualBox: "One of the main advantages of this is that Templates can be created away from the Exalogic Environment," explains The Old Toxophilist. (BTW: I had to look it up: a toxophilist is one who collects bows and arrows.)
    - ADF Mobile - Implementing Reusable Mobile Architecture: "Reusability was always a strong part of ADF," says Oracle ACE Director Andrejus Baranovskis. "The same high reusability level is supported now in ADF Mobile." The objective of this post is "to prove technically that [the] reusable architecture concept works for ADF Mobile."
    - Using BPEL Performance Statistics to Diagnose Performance Bottlenecks: Someone had a busy day... This post, one of four published on Dec 4 by a member of the Oracle Fusion Middleware A-Team identified only as "fip," offers details on how to "enable, retrieve and interpret the performance statistics, before the future versions provide a more pleasant user experience."

    Thought for the Day: "If you're afraid to change something it is clearly poorly designed." - Martin Fowler (Source: SoftwareQuotes.com)

    Read the article

  • Tap Into Tier 1 ERP

    - by Christine Randle
    By: Larry Simcox, Senior Director, Accelerate Corporate Programs

    Your customers aren't satisfied with so-so customer service. Your employees aren't happy with below-average salaries. So why would you settle for second-rate, tier 2 ERP? A recent report from Nucleus Research found that usability improvements and rapid implementation tools are simplifying deployments, putting tier 1 enterprise applications well within reach for midsize companies. So how can your business tap into the power of tier 1 ERP? And what are the best ways to manage a deployment?

    The Reputation of ERP Implementations

    Overhauling internal operations and implementing ERP can be a challenging endeavor for organizations of all sizes. Midsize companies often shy away from enterprise-class ERP, fearing complexity, limited resources and perceived challenging deployments. Many forward-thinking executives experienced ERP implementations in the late 90s and early 2000s and now embrace a strategy to grow their business by investing in a foundation for innovation and growth via ERP modernization projects. In recent years there has been a strong consumerization of IT, with enterprise applications and their delivery methods evolving to become more user-friendly. Today, usability improvements and modern implementation tools have made top-tier ERP solutions more accessible for growing companies. Nucleus found that because enterprise-class software can now be rapidly deployed, the payback is quicker, the risks are lower, the software is less disruptive and, overall, companies can differentiate themselves from their competitors and achieve more success with the advantages these types of systems deliver.

    Tapping into the power of tier 1 ERP can be made much easier with Oracle Accelerate solutions. Created by Oracle's expert partners and reviewed by Oracle, Oracle Accelerate solutions are simple-to-deploy, industry-specific, packaged solutions that provide a fast time to benefit, which means getting the right solution in place quickly and inexpensively, with a controlled scope and predictable returns.

    How are growing midsize companies successfully deploying tier 1 ERP? According to Nucleus Research, companies can increase success in their tier 1 ERP deployments by limiting customization, planning a rapid go-live, improving communication across departments, and considering different delivery options. Oracle Accelerate solutions incorporate industry best practices and encourage rapid deployments. Even more, Nucleus found that customers deploying tier 1 ERP with Oracle who had used Oracle Business Accelerators, Oracle's rapid implementation tools, reduced the time to deploy Oracle E-Business Suite by at least 50 percent.

    Industrial manufacturer L.H. Dottie is one company that needed ERP with enhanced capabilities to support its growth and streamline business processes. Using out-of-the-box configuration of Oracle E-Business Suite modules (provided by Oracle Business Accelerators and delivered by Oracle partner C3 Business Solutions), L.H. Dottie was able to speed its implementation and went live in just six and a half months. With tier 1 ERP, the company was able to grow and do its business better, automating a variety of processes, accelerating product delivery and gaining powerful data analysis capabilities that helped drive its business into further regions. See more details about their ERP implementation here.

    Tier 1 enterprise-class applications have proven to boost the success of Oracle's midsize customers. As Nucleus Research reiterates, companies poised for growth or seeking to compete against larger competitors absolutely can tap into the power of tier 1 ERP and position themselves as enterprise-class by leveraging Oracle Accelerate solutions. You can learn more about The Evolving Business Case for Tier 1 ERP in Midsize Companies in our exclusive webcast with Nucleus.

    Read the article

  • Welcome Oracle Data Integration 12c: Simplified, Future-Ready Solutions with Extreme Performance

    - by Irem Radzik
    The big day for the Oracle Data Integration team has finally arrived! It is my honor to introduce you to Oracle Data Integration 12c. Today we announced the general availability of the 12c release for Oracle's key data integration products: Oracle Data Integrator 12c and Oracle GoldenGate 12c. The new release delivers extreme performance, increases IT productivity, and simplifies deployment, while helping IT organizations keep pace with new data-oriented technology trends including cloud computing, big data analytics, and real-time business intelligence. With the 12c release, Oracle becomes the leader in data integration and replication technologies, as no other vendor offers such a complete set of data integration capabilities for pervasive, continuous access to trusted data across Oracle platforms as well as third-party systems and applications.

    The Oracle Data Integration 12c release addresses data-driven organizations' critical and evolving data integration requirements under three key themes: future-ready solutions, extreme performance, and fast time-to-value. There are many new features that support these key differentiators for Oracle Data Integrator 12c and for Oracle GoldenGate 12c. In this first 12c blog post, I will highlight only a few:

    - Future-Ready Solutions to Support Current and Emerging Initiatives: Oracle Data Integration offers robust and reliable solutions for key technology trends including cloud computing, big data analytics, real-time business intelligence and continuous data availability. Via tight integration with Oracle's database, middleware, and application offerings, Oracle Data Integration will continue to support new features and capabilities right away as these products evolve and provide advanced features.
    - Extreme Performance: Both GoldenGate and Data Integrator are known for their high performance. The new release widens the gap even further against the competition. Oracle GoldenGate 12c's Integrated Delivery feature enables higher throughput via a special application programming interface into Oracle Database. As mentioned in the press release, customers already report up to 5X higher performance compared to earlier versions of GoldenGate. Oracle Data Integrator 12c introduces parallelism that significantly increases its performance as well.
    - Fast Time-to-Value via Higher IT Productivity and Simplified Solutions: Oracle Data Integrator 12c's new flow-based declarative UI brings superior developer productivity, ease of use, and ultimately fast time to market for end users. The ability to seamlessly reuse mapping logic also speeds development. Oracle GoldenGate 12c's Integrated Delivery feature automatically and optimally tunes the process, saving time while improving performance.

    This is just a quick glimpse into Oracle Data Integrator 12c and Oracle GoldenGate 12c. On November 12th we will reveal much more about the new release in our video webcast "Introducing 12c for Oracle Data Integration". Our customer and partner speakers, including SolarWorld, BT, and Rittman Mead, will join us in launching the new release. Please join us at this free event to learn more from our executives about the 12c release, hear our customers' perspectives on the new features, and ask your questions to our experts in the live Q&A. Also, please continue to follow our blogs, tweets, and Facebook updates as we unveil more about the new features of the latest release.

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) Now Available!

    - by Javier Puerta
    Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) is now available on OTN on ALL platforms. This is the first major release since the launch of Enterprise Manager 12c in October of 2011 and the first ever Enterprise Manager release available on all platforms simultaneously. This is primarily a stability release that incorporates fixes for many of the issues reported by early adopters, along with their feedback. In addition, this release contains many new features and enhancements in areas across the board. New Capabilities and Features: Enhanced management capabilities for enterprise private clouds: introduces new capabilities to allow customers to build and manage a Java Platform-as-a-Service (PaaS) cloud based on Oracle WebLogic Server. The new capabilities include guided setup of the PaaS cloud, self-service provisioning, automatic scale-out, and metering and chargeback. Enhanced lifecycle management capabilities for Oracle WebLogic Server environments: combining in-context multiple-domain management, patching, and configuration file synchronization. Integrated hardware-software management for Oracle Exalogic Elastic Cloud through features such as rack schematics visualization and integrated monitoring of all hardware and software components. The latest management capabilities for business-critical applications include: Business Application Management: a new Business Application (BA) target type and dashboard with flexible definitions provides a logical view of an application's business transactions, end-user experiences, and the cloud infrastructure the monitored application is running on. Enhanced User Experience Reporting: Oracle Real User Experience Insight has been enhanced to provide reporting capabilities on client-side issues for applications running in the cloud and has been more tightly coupled with Oracle Business Transaction Management to help ensure that real-time user experience and transaction tracing data is provided to users in context. Several key improvements address ease of administration, reporting, and extensibility for massively scalable cloud environments, including dynamic groups, self-updateable monitoring templates, and bulk operations against many events. New and Revised Plug-ins: several plug-ins have been updated as part of this release, resulting in either new versions or revisions. Revised plug-ins contain only bug fixes, while new plug-ins incorporate both bug fixes and new functionality:
    Enterprise Manager for Oracle Database: 12.1.0.2 (revision)
    Enterprise Manager for Oracle Fusion Middleware: 12.1.0.3 (new)
    Enterprise Manager for Chargeback and Capacity Planning: 12.1.0.3 (new)
    Enterprise Manager for Oracle Fusion Applications: 12.1.0.3 (new)
    Enterprise Manager for Oracle Virtualization: 12.1.0.3 (new)
    Enterprise Manager for Oracle Exadata: 12.1.0.3 (new)
    Enterprise Manager for Oracle Cloud: 12.1.0.4 (new)
    Installation and Upgrade: all major platforms have been released simultaneously (Linux 32/64-bit, Solaris (SPARC), Solaris x86-64, IBM AIX 64-bit, and Windows x86-64). Enterprise Manager 12.1.0.2 is a complete release that includes both the EM OMS and Agent at version 12.1.0.2. Installation options available with EM 12.1.0.2: users can do a fresh install or an upgrade from EM 10.2.0.5, 11.1, or 12.1.0.1 (Bundle Patch 1 not mandatory). Upgrading to EM 12.1.0.2 from EM 12.1.0.1 is not a patch application (similar to Bundle Patch 1) but is achieved through a 1-system upgrade.
    Documentation: the Oracle Enterprise Manager Cloud Control Introduction Document provides a broad overview of capabilities and highlights "What's New" in EM 12.1.0.2. All updated Oracle Enterprise Manager documentation can be found on OTN. Customer Webcast - EM 12c Installation and Upgrade: this webcast is for customers who are interested in learning how to successfully deploy or upgrade to EM 12.1.0.2. Customer Webcast - Installation and Upgrade - September 21 (registration and info on OTN starting September 12). Enterprise Manager 12c R2 Resources: OTN Download Page, Upgrade Guide

    Read the article

  • Welcome to the Oracle Retail International Blog

    - by sarah.taylor(at)oracle.com
    Welcome to the first post of the new Oracle Retail International Blog. Retail is an international business and today's successful retailers view themselves in the context of a global market. A niche fashion business in Tokyo will learn marketing strategies from the luxury brands of Milan, an independent grocer in Oslo will source the same global brands as a supermarket in Oklahoma, and every retailer in the world will measure their multi-channel operation against the international e-commerce giant Amazon.  Why? Because today's customer is a global customer with unparalleled expectations of choice, price and service. Today's consumers have access to more information on retail than ever before. Technology allows people to shop from their home, their office or from the phone in their pocket, wherever they are and at whatever time suits them. Customers are using the web to search for products and promotions. They are also using the web to develop their voice in commenting on products and services that have delighted or disappointed. In an information-rich industry, this customer element creates a new world of data. The best retailers are developing eagle eyes for reading customer activity and turning it into profitable decisions. Ultimately, whether you choose to compete or shop on price, service, product innovation, excellent operations or all of the above, the international world of retail has become an inspiration for all, retailer and consumer alike.  Retail as an industry is growing and diversifying at a faster rate than ever before. Yet it is still the customer who picks the winners and the losers on the retail field. Economic circumstances transform the rules, but it is still the customer who dictates the game, the pace, the price, and the perception of the brand. Wise retailers never rest on their laurels. They are always shopping for ideas on how to improve and differentiate the offer at every touch point, to meet the customer's needs better than anyone else and to gain each customer's loyalty at a time when loyalty can be cheap. With this blog, I hope that we might provide a hub for discussion around what unifies retail and how technology supports both the retailer and the customer experience. Despite the competitive nature of this market, we hope that this will provide an opportunity to share experiences and lessons learnt, with a view that knowledge can only help this industry to grow and develop. At Oracle we've been supporting retailers for many years. Many of us have worked within retail organisations all over the world, myself included. With this in mind, I don't feel it is too bold a statement to say that Oracle understands retail. We wouldn't be so heavily integrated in some of the biggest and most well-known names in retail if we didn't. With this blog, we intend to create a community of international retailers that can exchange ideas and experiences, debate collective challenges and drive a better understanding of this continually evolving industry. Events such as the World Retail Congress and NRF's Big Show bring enormous value to the retail industry, providing platforms for discussion and learning, but they happen once a year. We wanted to create a platform for discussion on a different level, one that, like retail, is always on.
    We hope to be not only the infrastructure that brings all of a retailer's systems together within the business, but also an infrastructure that supports the industry internationally to grow and flourish by creating a platform for networking, discussion, creativity, vision and strategy. Please feel free to ask questions or comment using the comments functionality.  You might also want to visit our other Oracle Retail social media sites: Facebook - http://www.facebook.com/oracleretail YouTube - http://www.youtube.com/user/oracleretail Twitter - http://twitter.com/#!/oracleretail Insight-Driven Retailing Blog - http://blogs.oracle.com/retail/

    Read the article

  • Silverlight Cream for December 07, 2010 -- #1004

    - by Dave Campbell
    In this Issue: András Velvárt, Kunal Chowdhury(-2-), AvraShow, Gill Cleeren, Ian T. Lackey, Richard Waddell, Joe McBride, Michael Crump, Xpert360, keyboardP, and Pete Vickers(-2-). Above the Fold: Silverlight: "Grouping Records in Silverlight DataGrid using PagedCollectionView" Kunal Chowdhury WP7: "Phone 7 Back Button and the ListPicker control" Ian T. Lackey Shoutouts: Colin Eberhardt has some Silverlight 5 Adoption Predictions you may want to check out. Michael Crump has a post up showing lots of the goodness of Silverlight 5 from the Firestarter... screenshots, code snippets, etc: Silverlight 5 – What's New? (Including Screenshots & Code Snippets) Kunal Chowdhury has a pretty complete Silverlight 5 feature set from the Firestarter and an embedded copy of Scott Guthrie's keynote running on the page: New Features Announced for Silverlight 5 Beta From SilverlightCream.com: Just how productive is WP7 development compared to iOS, Android and mobile Web? András Velvárt blogged about a contest he took part in to build a WP7 app in 1-1/2 hours without any prior knowledge of its function. He and his team-mate were pitted against other teams on Android, iOS, and mobile Web... guess who got (almost) their entire app running? ... just too cool Andras! ... Grouping Records in Silverlight DataGrid using PagedCollectionView Kunal Chowdhury has a couple good posts up; this first one is on using the PagedCollectionView to group the records in a DataGrid... code included (a rough sketch of the API's shape appears after this digest). Filtering Records in Silverlight DataGrid using PagedCollectionView Kunal Chowdhury then continues with another post on the PagedCollectionView, only this time showing how to do some filtering. DeepZoom Tips and Techniques AvraShow has a post up discussing using DeepZoom to explore, in his case, a Printed Circuit Board, with information about how he proceeded in doing that, and some tips and techniques along the way. The validation story in Silverlight (Part 2) Gill Cleeren has Part 2 of his Silverlight Validation series up at SilverlightShow. This post gets into IDataErrorInfo and INotifyDataErrorInfo. Lots of code, and the example is available for download. Phone 7 Back Button and the ListPicker control Ian T. Lackey has a post up about the WP7 back button, what can get a failure from the Marketplace in that area, and how that applies to the ListPicker as well. Very Simple Example of ICommand CanExecute Method and CanExecuteChanged Event Richard Waddell has a nice detailed tutorial on ICommand and dealing with CanExecute... lots of Blend love in this post. Providing an Alternating Background Color for an ItemsControl Joe McBride has a post up discussing putting an alternating background color on an ItemsControl... you know, like you do on a grid... interesting idea, and all the code... Pimp my Silverlight Firestarter Michael Crump has a great Firestarter post up... where and how to get the videos, the labs... a good Firestarter resource for sure. Adventures with PivotViewer Part 7: Slider control Xpert360 has part 7 of the PivotViewer series they're doing up. This time they're demonstrating taking programmatic control of the Zoom slider. Creating Transparent Lockscreen Wallpapers for WP7 I don't know keyboardP's name, but he's got a cool post up about getting an image up for the WP7 lock screen that has transparent regions on it... pretty cool actually. Windows Phone 7 Linq to XML 'strangeness' Pete Vickers has a post up describing a problem he found with Linq to XML on WP7.
He even has a demo app that has the problem, and the fix... and it's all downloadable. Windows Phone 7 multi-line radio buttons Pete Vickers has another quick post up on radio buttons with so much text that it needs wrapping ... this is for WP7, but applies to Silverlight in general. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10
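    A quick aside on Kunal's PagedCollectionView posts above: the grouping and filtering APIs he covers are small enough to sketch here. This is only a rough, hedged illustration of their shape; the Dinner class, property names, and city value are made up for this example, so see his posts for the real walkthroughs.

    using System.Collections.Generic;
    using System.Windows.Controls;   // DataGrid (System.Windows.Controls.Data assembly)
    using System.Windows.Data;       // PagedCollectionView, PropertyGroupDescription

    // Hypothetical model class, for illustration only.
    public class Dinner
    {
        public string Title { get; set; }
        public string City { get; set; }
    }

    public static class DinnerGridHelper
    {
        // Wraps the source list in a PagedCollectionView, groups the
        // rows by City, and filters the view down to a single city.
        public static void BindGroupedAndFiltered(DataGrid grid, List<Dinner> dinners)
        {
            var view = new PagedCollectionView(dinners);
            view.GroupDescriptions.Add(new PropertyGroupDescription("City"));
            view.Filter = item => ((Dinner)item).City == "Phoenix";
            grid.ItemsSource = view;
        }
    }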

    Read the article

  • Upgrading from MVC 1.0 to MVC2 in Visual Studio 2010 and VS2008.

    - by Sam Abraham
    With MVC2 officially released, I was involved in a few conversations regarding the feasibility of upgrading existing MVC 1.0 projects to quickly leverage the newly introduced MVC features. Luckily, Microsoft has proactively addressed this question for both Visual Studio 2008 and 2010, and many online resources discussing the upgrade process are a "Bing/Google Search" away. As I happen to be speaking about MVC2 and Visual Studio 2010 at the Ft Lauderdale ArcSig .Net User Group Meeting on April 20th 2010 (check http://www.fladotnet.com for more info), I decided to include a quick demo on upgrading the NerdDinner project (which I consider the "Hello MVC World" project) from MVC 1.0 to MVC2 using Visual Studio 2010, to demonstrate how simple the upgrade process is. In the next few lines, I will briefly touch on upgrading to MVC2 for Visual Studio 2008, then discuss in more detail the upgrade process using Visual Studio 2010, while highlighting the advantage of its multi-targeting support. Using Visual Studio 2008 SP1: for upgrading to MVC2 using VS2008 SP1, a Microsoft white paper [1] presents two approaches: (1) using a provided automated upgrade tool, or (2) manually upgrading the project. I personally prefer using the automated tool, although it comes with an "AS IS" disclaimer. For those brave souls, or those who end up with no luck using the tool, detailed manual upgrade steps are also provided as a second option. Backing up the project in question is a must regardless of which route one takes to upgrade. Using Visual Studio 2010: life is much easier for developers who have already adopted Visual Studio 2010. Simply opening the MVC 1.0 solution file brings up the upgrade wizard, as shown in figures 1 through 4. As we proceed with the upgrade process, the wizard requests confirmation on whether we choose to upgrade our target framework version to .Net 4.0 or keep the existing .Net 3.5 (figure 5). VS2010 does a good job with multi-targeting: we can still develop .Net 3.5 applications while leveraging all the new bells and whistles that VS2010 brings to the table (multi-targeting enables us to develop against frameworks as early as .Net 2.0 in VS2010). Figure 1 - Open Solution File Using VS2010. Figure 2 - VS2010 Conversion Wizard. Figure 3 - Ready To Convert To VS2010 Confirmation Screen. Figure 4 - VS2010 Solution Conversion Progress. Figure 5 - Confirm Target Framework Upgrade. In an attempt to make my demonstration realistic, I opted to keep the project targeted to the .Net 3.5 Framework. After the successful completion of the conversion process, a quick sanity check revealed that the NerdDinner project is still targeted to the .Net 3.5 framework, as shown in figure 6. Inspecting the Web.Config revealed that the MVC DLL version our code compiles against has been successfully upgraded to 2.0 (figure 7; a sketch of the relevant Web.Config fragment appears after this post), and hence we should now be able to leverage the newly introduced features in MVC2 and VS2010 with no effort or time invested in modifying existing code. Figure 6 - Confirm Target Framework Remained .Net 3.5. Figure 7 - Confirm MVC DLL Version Has Been Upgraded. In conclusion, Microsoft has empowered developers with the tools necessary to quickly and seamlessly upgrade their MVC solutions to the newly released MVC2. The multi-targeting feature in Visual Studio 2010 enables us to adopt this latest and greatest development tool while supporting development in frameworks as early as .Net 2.0. References: 1. "Upgrading an ASP.NET MVC 1.0 Application to ASP.NET MVC 2" - http://www.asp.net/learn/whitepapers/aspnet-mvc2-upgrade-notes

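    As a rough illustration of the Web.Config change called out in figure 7 above: an upgraded project ends up referencing System.Web.Mvc version 2.0, and the manual-upgrade route described in the white paper involves a binding redirect along these lines. This is a hedged sketch of the fragment, not the literal output of the wizard.

    <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
        <dependentAssembly>
          <assemblyIdentity name="System.Web.Mvc"
                            publicKeyToken="31bf3856ad364e35" />
          <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
        </dependentAssembly>
      </assemblyBinding>
    </runtime>

    With a redirect like this in place, assemblies compiled against the MVC 1.0 DLL resolve to the 2.0 DLL at runtime.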
    Read the article

  • Python Coding standards vs. productivity

    - by Shroatmeister
    I work for a large humanitarian organisation, on a project building software that could help save lives in emergencies by speeding up the distribution of food. Many NGOs desperately need our software and we are weeks behind schedule. One thing that worries me in this project is what I think is an excessive focus on coding standards. We write in python/django and use a version of PEP0008, with various modifications: e.g. line lengths can go up to 160 chars and all lines should go that long if possible, no blank lines between imports, line wrapping rules that apply only to certain kinds of classes, lots of templates that we must use even if they aren't the best way to solve a problem, etc. One core dev spent a week rewriting a major part of the system to meet the then-new coding standards, throwing away several suites of tests in the process, as the rewrite meant they were 'invalid'. We spent two weeks rewriting all the functionality that was lost, and fixing bugs. He is the lead dev and his word carries weight, so he has convinced the project manager that these standards are necessary. The junior devs do as they are told. I sense that the project manager has a strong feeling of cognitive dissonance about all this but nevertheless agrees with it vehemently, as he feels unsure what else to do. Today I got in serious trouble because I had forgotten to put some spaces after commas in a keyword argument. I was literally shouted at by two other devs and the project manager during a Skype call. Personally I think coding standards are important, but I also think that we are wasting a lot of time obsessing over them, and when I verbalized this it provoked rage. I'm seen as a troublemaker in the team, a team that is looking for scapegoats for its failings. Since the introduction of the coding standards, the team's productivity has measurably plummeted; however, this only reinforces the obsession, i.e. the lead dev simply blames our non-adherence to standards for the lack of progress. He believes that we can't read each other's code if we don't adhere to the conventions. This is starting to turn sticky. Now I am trying to modify various scripts, autopep8, pep8ify and PythonTidy, to try to match the conventions. We also run pep8 against source code, but there are so many implicit amendments to our standard that it's hard to track them all. The lead dev simply picks faults that the pep8 script doesn't pick up and shouts at us in the next stand-up meeting. Every week there are new additions to the coding standards that force us to rewrite existing, working, tested code. Thank heavens we still have tests (I reverted some commits and fixed a bunch of the ones he removed). All the while there is increasing pressure to meet the deadline. I believe a fundamental issue is that the lead dev and another core dev refuse to trust other developers to do their job. But how to deal with that? We can't do our job because we are too busy rewriting everything. I've never encountered this dynamic in a software engineering team. Am I wrong to question their adherence to coding standards? Has anyone else experienced a similar situation, and how have they dealt with it successfully? (I'm not looking for a discussion, just actual solutions people have found.)

    Read the article

  • RC of Entity Framework 4.1 (which includes EF Code First)

    - by ScottGu
    Last week the data team shipped the Release Candidate of Entity Framework 4.1.  You can learn more about it and download it here. EF 4.1 includes the new "EF Code First" option that I've blogged about several times in the past.  EF Code First provides a really elegant and clean way to work with data, and enables you to do so without requiring a designer or XML mapping file (a minimal sketch of the pattern appears at the end of this post).  Below are links to some tutorials I've written in the past about it: Code First Development with Entity Framework 4.x EF Code First: Custom Database Schema Mapping Using EF Code First with an Existing Database The above tutorials were written against the CTP4 release of EF Code First (and so some APIs might be a little different) – but the concepts and scenarios outlined in them are the same as with the RC. Go Live License Last week's EF 4.1 RC ships with a "go live" license that enables you to use it in production environments.  The final release of EF 4.1 will ship within the next 4 weeks and will be 100% API compatible with the RC release. Improvements with the RC The RC includes several improvements and enhancements.  The EF team has a good blog post summarizing the RC changes.  Scott Hanselman also has a nice video interview with the data team that talks more about the release. One of my favorite improvements introduced with last week's RC is its support for medium trust security.  This enables you to use EF 4.1 (and code-first) within low-cost ASP.NET shared hosting web environments – without requiring a hoster to install anything to use it. EF 4.1 also now supports validation with not only code-first scenarios, but also model-first and database-first workflows.  Upgrading from previous releases The RC does include a few API tweaks and changes from the prior CTP builds.  Read the release notes that come with the release to get a more detailed listing of the changes. John Papa also has an excellent Upgrading to EF 4.1 RC blog post that describes the steps he took when upgrading a large project he wrote with the previous CTP5 release.  The work to upgrade is pretty straightforward and easy – use his write-up as a guide on how to quickly update projects of your own. NuGet Package Rename One of the changes that the data team made between the CTP5 and RC releases was to rename the NuGet package name from "EFCodeFirst" to "EntityFramework". They decided to make this change since the EF 4.1 release now includes several additions above and beyond just code first. If you already have installed the "EFCodeFirst" NuGet package, you'll want to uninstall it and then install the new "EntityFramework" NuGet package.  John Papa's blog post details the exact steps on how to do this (it only takes ~20 seconds to do this). More EF Tutorials Julie Lerman has created some nice whitepapers and tutorials for MSDN that show using the new EF4 and EF 4.1 feature set. Click here to find links to read and watch them. Summary I'm really excited about the EF 4.1 release that will be shipping next month.  It significantly improves the Entity Framework, and makes it even easier and cleaner to work with data inside of .NET.  You can take advantage of it within all ASP.NET projects (including both Web Forms and MVC), within client projects using Windows Forms and WPF, and within other project types like WCF, Console and Services.  You can use NuGet to easily install it within all of them. Hope this helps, Scott P.S. I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
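    For readers who haven't tried Code First yet, here is a minimal sketch of the pattern the tutorials above walk through, using the EF 4.1 DbContext and DbSet types from the EntityFramework NuGet package. The entity and context names are illustrative; by convention EF will look for, or create, a database matching the context name.

    using System.Data.Entity;   // DbContext, DbSet
    using System.Linq;

    // A plain POCO entity: no designer, no XML mapping file.
    public class Dinner
    {
        public int DinnerId { get; set; }
        public string Title { get; set; }
    }

    // The context exposes one DbSet per table.
    public class NerdDinners : DbContext
    {
        public DbSet<Dinner> Dinners { get; set; }
    }

    class Program
    {
        static void Main()
        {
            using (var db = new NerdDinners())
            {
                // Insert a row, then query it back with LINQ.
                db.Dinners.Add(new Dinner { Title = "Pizza night" });
                db.SaveChanges();

                var titles = db.Dinners.Select(d => d.Title).ToList();
            }
        }
    }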

    Read the article

  • Bridging Two Worlds: Big Data and Enterprise Data

    - by Dain C. Hansen
    The big data world is all the vogue in today's IT conversations. It's a world of volume, velocity, variety – tantalizing us with its untapped potential. It's a world of transformational, game-changing technologies that have already begun to alter the information management landscape. One of the reasons that big data is so compelling is that it's a universal challenge that impacts every one of us. Whether it is healthcare, financial, manufacturing, government, or retail, big data presents a pressing problem for many industries: how can so much information be processed so quickly to deliver the 'bigger' picture? With big data we're tapping into new information that didn't exist before: social data, weblogs, sensor data, complex content, and more. What also makes big data revolutionary is that it turns traditional information architecture on its head, putting into question commonly accepted notions of where and how data should be aggregated, processed, analyzed, and stored. This is where Hadoop and NoSQL come in – new technologies which solve new problems for managing unstructured data. And now for some worst practices that I'd recommend you please not follow: Worst Practice Lesson 1: Throw away everything that you already know about data management and data integration tools, and start completely over. One shouldn't forget what's already running in today's IT. Today's business analytics, data warehouses, business applications (ERP, CRM, SCM, HCM), and even many social, mobile, and cloud applications still rely almost exclusively on structured data – or what we'd like to call enterprise data. This dilemma is what today's IT leaders are up against: what are the best ways to bridge enterprise data with big data? And what are the best strategies for dealing with the complexities of these two unique worlds? Worst Practice Lesson 2: Throw away all of your existing business applications... because they don't run on big data yet. Bridging the two worlds of big data and enterprise data means considering solutions that are complete, based on emerging Hadoop technologies (as well as traditional ones), and poised for success through integrated design tools and integrated platforms that connect to your existing business applications and support real-time analytics. Leveraging these types of best practices translates to improved productivity, lowered TCO, IT optimization, and better business insights. Worst Practice Lesson 3: Separate out [and keep separate] your big data sandboxes from all the current enterprise IT systems. Don't mix sand among playgrounds. We didn't tell you that you wouldn't get dirty doing this. Correlation between the two worlds is key.
The real advantage to analyzing big data comes when you can correlate it with the existing data in your data warehouse or your current applications to make sense of the larger patterns. If you have not followed these worst practices 1-3 then you qualify for the first step of our journey: bridging the two worlds of enterprise data and big data. Over the next several weeks we’ll be discussing this topic along with several others around big data as it relates to data integration. We welcome you to join us in the conversation by following us on twitter on #BridgingBigData or download our latest white paper and resource kit: Big Data and Enterprise Data: Bridging Two Worlds.

    Read the article

  • Windows Phone 8, possible tablets and what the latest update might mean

    - by Roger Hart
    Microsoft have just announced an update to Windows Phone 8. As one of the five, maybe six people who actually bought a WP8 handset I found this interesting. Then I read the blog post about it, and rushed off to write somewhat less than a thousand words about a single picture. The blog post announces an extra column of tiles on the start screen, and support for higher resolutions. If we ignore all the usual flummery about how this will make your life better, that (and the rotation lock) sounds a little like stage setting for tablets. Looking at the preview screenshot, I started to wonder. What it's called: Phablet_5F00_StartScreenProductivity_5F00_01_5F00_072A1240.jpg Pretty conclusive. If you can brand something a "phablet" and sleep at night you're made of sterner stuff than I am, but that's beside the point. It's explicit in the post that Microsoft are expecting a broader range of form factors for WP8, but they stop short of quite calling out tablet size. The extra columns and resolution definitely back that up, so why stop at a 6 inch "phablet"? Sadly, the string of numbers there doesn't really look like a Lumia model number – that would be a bit tendentious even for a speculative blog post about a single screenshot. "Productivity" is interesting too. I get into this a bit more below, but this is a pretty clear pitch for a business device. What it looks like: Something that would look quite decent on a 7 inch screen, but something a bit too vertical to go toe-to-toe with the Surface. Certainly, it would look a lot better on a large-factor phone than any of the current models. Those tiles are going to get cramped and a bit ugly if the handsets aren't getting bigger. What's on it: You have a bunch of missed calls, you rarely text, use a stocks app, and your budget spreadsheet and meeting notes are a thumb-reach away. Outlook is your main form of email. You care enough about LinkedIn to not only install its app but give it a huge live tile. There's no beating about the bush here, the implicit persona is a corporate exec. With Nokia in the bag and BlackBerry pushing daisies, that may not be a stupid play. There's almost certainly a niche there if they can parlay their corporate credentials into filling it before BYOD (which functionally means an iPhone) reaches the late adopters. The really quite slick WP8 Office implementation ought to help here. This is the face they've chosen to present, the cultural milieu they're normalizing for Windows Phone. It's an iPhone for Serious Business Grown-ups. Could work, I guess. Does it mean anything? Is the latest WP8 update a sign that we can expect to see tablets running Windows Phone rather than WinRT? Well, WinRT tablets haven't exactly taken off, but I'm not quite going to make a leap like that just from a file name and a column of icons. I feel pretty safe, however, conjecturing that Microsoft would like to squeeze a WP8 "phablet" into the palm of every exec who's ever grumbled about their BlackBerry, and this release might get them a bit closer. If it works well incrementing up to larger devices, then that could be a fair hedge against WinRT crashing and burning any harder in the marketplace.

    Read the article

  • Guessing Excel Data Types

    - by AjarnMark
    Note to Self HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel: TypeGuessRows = 0 means scan everything. Note to Others About 10 years ago I stumbled across this bit of information just when I needed it and it saved my project.  Then for some reason, a few years later when it would have been nice, but not critical, for some reason I could not find it again anywhere.  Well, now I have stumbled across it again, and to preserve my future self from nightmares and sudden baldness due to pulling my hair out, I have decided to blog it in the hopes that I can find it again this way. Here’s the story…  When you query data from an Excel spreadsheet, such as with old-fashioned DTS packages in SQL 2000 (my first reference) or simply with an OLEDB Data Adapter from ASP.NET (recent task) and if you are using the Microsoft Jet 4.0 driver (newer ones may deal with this differently) then you can get funny results where the query reports back that a cell value is null even when you know it contains data. What happens is that Excel doesn’t really have data types.  While you can format information in cells to appear like certain data types (e.g. Date, Time, Decimal, Text, etc.) that is not really defining the cell as being of a certain type like we think of when working with databases.  But, presumably, to make things more convenient for the user (programmer) when you issue a query against Excel, the query processor tries to guess what type of data is contained in each column and returns it in an appropriate manner.  This is all well and good IF your data is consistent in every row and matches what the processor guessed.  And, for efficiency’s sake, when the query processor is trying to figure out each column’s data type, it does so by analyzing only the first 8 rows of data (default setting). Now here’s the problem, suppose that your spreadsheet contains information about clothing, and one of the columns is Size.  Now suppose that in the first 8 rows, all of your sizes look like 32, 34, 18, 10, and so on, using numbers, but then, somewhere after the 8th row, you have some rows with sizes like S, M, L, XL.  What happens is that by examining only the first 8 rows, the query processor inferred that the column contained numerical data, and then when it hits the non-numerical data in later rows, it comes back blank.  Major bummer, and a real pain to track down if you don’t know that Excel is doing this, because you study the spreadsheet and say, “the data is RIGHT THERE!  WHY doesn’t the query see it?!?!”  And the hair-pulling begins. So, what’s a developer to do?  One option is to go to the registry setting noted above and change the DWORD value of TypeGuessRows from the default of 8 to 0 (zero).  Setting this value to zero will force Jet to scan every row in the spreadsheet before making its determination as to what type of data the column contains.  And that means that in the example above, it would have treated the column as a string rather than as numeric, and presto! your query now returns all of the values that you know are in there. Of course, there is a caveat… if you are querying large spreadsheets, making Jet scan every row can be quite a performance hit.  You could enter a different number (more than 8) that you believe is a better sampling of rows to make the guess, but you still have the possibility that every row scanned looks alike, but that later rows are different, and that you might get blanks when there really is data there.  
    That's the type of gamble I really don't like to take with my data. Anyone with a better approach, or with experience with more recent drivers that have a better way of handling data types, please chime in!
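    For context, here is a hedged sketch of the kind of query the post describes, issued from C# through the Jet 4.0 OLEDB provider. The file path and sheet name are placeholders; note that the IMEX=1 extended property asks Jet to treat mixed-type columns as text, which softens, but does not eliminate, the guessing behavior governed by TypeGuessRows.

    using System.Data;
    using System.Data.OleDb;

    class ExcelReader
    {
        // Reads a worksheet into a DataTable via the Jet 4.0 provider.
        static DataTable ReadSheet(string path)
        {
            string connStr =
                "Provider=Microsoft.Jet.OLEDB.4.0;" +
                "Data Source=" + path + ";" +
                "Extended Properties=\"Excel 8.0;HDR=Yes;IMEX=1\"";

            var table = new DataTable();
            using (var conn = new OleDbConnection(connStr))
            using (var adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn))
            {
                adapter.Fill(table);   // Fill opens and closes the connection itself
            }
            return table;
        }
    }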

    Read the article

  • Silverlight Cream for February 21, 2011 -- #1049

    - by Dave Campbell
    In this Issue: Rob Eisenberg(-2-), Gill Cleeren, Colin Eberhardt, Alex van Beek, Ishai Hachlili, Ollie Riches, Kevin Dockx, WindowsPhoneGeek(-2-), Jesse Liberty(-2-), and John Papa. Above the Fold: Silverlight: "Silverlight 4: Creating useful base classes for your views and viewmodels with PRISM 4" Alex van Beek WP7: "Google Sky on Windows Phone 7" Colin Eberhardt Shoutouts: My friends at SilverlightShow have their top 5 for last week posted: SilverlightShow for Feb 14 - 20, 2011 From SilverlightCream.com: Rob Eisenberg MVVMs Us with Caliburn.Micro! Rob Eisenberg chats with Carl and Richard on .NET Rocks episode 638 about Caliburn.Micro, which takes Convention-over-Configuration further, utilizing naming conventions to handle a large number of data binding, validation and other action-based characteristics in your app. Two Caliburn Releases in One Day! Rob Eisenberg also announced that release candidates for both Caliburn 2.0 and Caliburn.Micro 1.0 are now available. Check out the docs and get the bits. Getting ready for Microsoft Silverlight Exam 70-506 (Part 6) Gill Cleeren has Part 6 of his series on getting ready for the Silverlight exam up at SilverlightShow... this time out, Gill is discussing app startup, localization, and using resource dictionaries, just to name a few things. Google Sky on Windows Phone 7 Colin Eberhardt has a very cool WP7 app described where he's using Google Sky as the tile source for Bing Maps, and then has a list of 110 Messier Objects... interesting astronomical objects that you can look at... all with source! Silverlight 4: Creating useful base classes for your views and viewmodels with PRISM 4 Alex van Beek has some Prism4/Unity MVVM goodness up with this discussion of a login module using View and ViewModel base classes. Windows Phone 7 and WCF REST – Authentication Solutions Ishai Hachlili sent me this link to his post about a WCF REST web service and authentication for WP7, and he offers up 2 solutions... from the looks of this, I'm also putting his blog on my watch list. WP7Contrib: Isolated Storage Cache Provider Ollie Riches has a complete explanation and code example of using the IsolatedStorageCacheProvider in their WP7Contrib library. Using a ChannelFactory in Silverlight, part two: binary cows & new-born calves Kevin Dockx follows up his post on Channel Factories with this part 2, expanding the knowledge base into using parameters and custom binding with binary encoding, both from reader suggestions. All about UriMapping in WP7 WindowsPhoneGeek has a post up about URI mappings in WP7... what it is, how to enable it in code-behind or XAML, then using it either with a hyperlink button or via the NavigationService class... all with code (a rough code-behind sketch appears after this digest). Passing WP7 Memory Consumption requirements with the Coding4Fun MemoryCounter tool WindowsPhoneGeek's latest is a tutorial on the use of the Memory Counter control from the Coding4Fun toolkit and WP7 memory consumption. Getting Started With Linq Jesse Liberty gets into LINQ in Episode 33 of his WP7 'From Scratch' series... looks like a good LINQ starting point, and he's going to be doing a series on it. Linq with Objects In his second post on LINQ, Jesse Liberty is looking at creating a Linq query against a collection of objects... always good stuff, Jesse! Silverlight TV Silverlight TV 62: The Silverlight 5 Triad Unplugged John Papa is joined by Sam George, Larry Olson, and Vijay Devetha (the Silverlight Triad) on this Silverlight TV episode 62 to discuss how the team works together, and hey... they're hiring!
Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10
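    Since WindowsPhoneGeek's UriMapping post above covers enabling the mapper in code-behind, here is a rough sketch of what that can look like. The /Details/{id} route, the page path, and the helper class are made up for illustration; the real post has the full treatment.

    using System;
    using System.Windows.Navigation;   // UriMapper, UriMapping
    using Microsoft.Phone.Controls;    // PhoneApplicationFrame

    public static class UriMappingSetup
    {
        // Call once at startup, e.g. from App.xaml.cs after RootFrame is created.
        public static void Apply(PhoneApplicationFrame rootFrame)
        {
            var mapper = new UriMapper();
            mapper.UriMappings.Add(new UriMapping
            {
                // Friendly URI requested by navigation code or hyperlink buttons...
                Uri = new Uri("/Details/{id}", UriKind.Relative),
                // ...mapped to the physical page, with the {id} token carried through.
                MappedUri = new Uri("/Views/DetailsPage.xaml?id={id}", UriKind.Relative)
            });
            rootFrame.UriMapper = mapper;
        }
    }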

    Read the article

< Previous Page | 167 168 169 170 171 172 173 174 175 176 177 178  | Next Page >