Search Results



  • Altering a Column Which has a Default Constraint

    - by Dinesh Asanka
    Setting a default value for a column is a common task for developers. But are we naming those default constraints explicitly? In the table creation below, the column Sys_DateTime is given the default value GETDATE():

        CREATE TABLE SampleTable
        (ID int identity(1,1),
         Sys_DateTime Datetime DEFAULT getdate()
        )

    We can check the relevant information in the system catalogs with the following query:

        SELECT sc.name ColumnName,
               dc.name DefaultName,
               dc.definition,
               OBJECT_NAME(dc.parent_object_id) TableName,
               dc.is_system_named
        FROM sys.default_constraints dc
        INNER JOIN sys.columns sc
          ON dc.parent_object_id = sc.object_id
         AND dc.parent_column_id = sc.column_id

    Most of the returned columns are self-explanatory. The last column, is_system_named, identifies whether the constraint name was generated by the system. In the case above, since we didn't supply a name, the system generated one for us. The problem with these generated names is that they can differ from environment to environment. For example, if I create the same table in a different database, the default constraint might be named DF__SampleTab__Sys_D__7E6CC920.

    Now let us create another default and explicitly name it:

        CREATE TABLE SampleTable2
        (ID int identity(1,1),
         Sys_DateTime Datetime
        )

        ALTER TABLE SampleTable2
        ADD CONSTRAINT DF_sys_DateTime_Getdate DEFAULT( Getdate()) FOR Sys_DateTime

    If we run the previous query again, the output shows that this newly created default has 0 for is_system_named.

    Now let us say I want to change the data type of the Sys_DateTime column to something else:

        ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date

    This generates the following error:

        Msg 5074, Level 16, State 1, Line 1
        The object 'DF_sys_DateTime_Getdate' is dependent on column 'Sys_DateTime'.
        Msg 4922, Level 16, State 9, Line 1
        ALTER TABLE ALTER COLUMN Sys_DateTime failed because one or more objects access this column.

    This means you need to drop the default constraint before altering the column, and re-create it afterwards:

        ALTER TABLE [dbo].[SampleTable2] DROP CONSTRAINT [DF_sys_DateTime_Getdate]
        ALTER TABLE SampleTable2 ALTER COLUMN Sys_DateTime Date
        ALTER TABLE [dbo].[SampleTable2] ADD CONSTRAINT [DF_sys_DateTime_Getdate] DEFAULT (getdate()) FOR [Sys_DateTime]

    If you have a system-named default constraint, its name can differ from environment to environment, so you cannot hard-code the DROP as above. Instead, you can use the following code template:

        DECLARE @defaultname VARCHAR(255)
        DECLARE @executesql VARCHAR(1000)

        SELECT @defaultname = dc.name
        FROM sys.default_constraints dc
        INNER JOIN sys.columns sc
          ON dc.parent_object_id = sc.object_id
         AND dc.parent_column_id = sc.column_id
        WHERE OBJECT_NAME (parent_object_id) = 'SampleTable'
          AND sc.name = 'Sys_DateTime'

        SET @executesql = 'ALTER TABLE SampleTable DROP CONSTRAINT ' + @defaultname
        EXEC( @executesql)

        ALTER TABLE SampleTable ALTER COLUMN Sys_DateTime Date
        ALTER TABLE [dbo].[SampleTable] ADD DEFAULT (Getdate()) FOR [Sys_DateTime]
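    If you need to perform the same change from application code rather than a script, the pattern translates directly to ADO.NET. The C# sketch below is an illustration only, not part of the original article: the connection string is an assumption for your environment, and the table and column names simply reuse the sample above.

        using System;
        using System.Data.SqlClient;

        class DropDefaultThenAlter
        {
            static void Main()
            {
                // Assumed connection string - adjust for your environment.
                const string connectionString = @"Server=.;Database=SampleDb;Integrated Security=true";

                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();

                    // Look up the (possibly system-generated) default constraint name.
                    const string findConstraint =
                        @"SELECT dc.name
                          FROM sys.default_constraints dc
                          INNER JOIN sys.columns sc
                            ON dc.parent_object_id = sc.object_id
                           AND dc.parent_column_id = sc.column_id
                          WHERE OBJECT_NAME(dc.parent_object_id) = @table AND sc.name = @column";

                    string constraintName;
                    using (var find = new SqlCommand(findConstraint, connection))
                    {
                        find.Parameters.AddWithValue("@table", "SampleTable");
                        find.Parameters.AddWithValue("@column", "Sys_DateTime");
                        // A real implementation would handle the case where no constraint exists.
                        constraintName = (string)find.ExecuteScalar();
                    }

                    // Drop the constraint, alter the column, then re-add the default.
                    string[] statements =
                    {
                        "ALTER TABLE SampleTable DROP CONSTRAINT [" + constraintName.Replace("]", "]]") + "]",
                        "ALTER TABLE SampleTable ALTER COLUMN Sys_DateTime Date",
                        "ALTER TABLE SampleTable ADD DEFAULT (GETDATE()) FOR Sys_DateTime"
                    };

                    foreach (var sql in statements)
                    {
                        using (var command = new SqlCommand(sql, connection))
                        {
                            command.ExecuteNonQuery();
                        }
                    }
                }
            }
        }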

    Read the article

  • Adding Attributes to Generated Classes

    ASP.NET MVC 2 adds support for data annotations, implemented via attributes on your model classes. Depending on your design, you may be using an OR/M tool like Entity Framework or LINQ-to-SQL to generate your entity classes, and you may further be using these entities directly as your Model. This is fairly common, and alleviates the need to map between POCO domain objects and such entities (though there are certainly pros and cons to using such entities directly). As an example, the current version of the NerdDinner application (available on CodePlex at nerddinner.codeplex.com) uses Entity Framework for its model. Thus, there is a NerdDinner.edmx file in the project, and a generated NerdDinner.Models.Dinner class. Fortunately, these generated classes are marked as partial, so you can extend their behavior via your own partial class in a separate file. However, if for instance the generated Dinner class has a property Title of type string, you can't then add your own Title of type string for the purpose of attaching data annotations to it, like this:

        public partial class Dinner
        {
            [Required]
            public string Title { get; set; }
        }

    This results in a compilation error, because the generated Dinner class already contains a definition of Title. How then can we add attributes to this generated code? Do we need to go into the T4 template and add a special case that says "if we're generating a Dinner class and it has a Title property, add this attribute"? Ick.

    MetadataType to the Rescue

    The MetadataType attribute can be used to define a type which contains attributes (metadata) for a given class. It is applied to the class you want to add metadata to (Dinner), and it refers to a totally separate class to which you're free to add whatever methods and properties you like. Using this attribute, our partial Dinner class might look like this:

        [MetadataType(typeof(Dinner_Validation))]
        public partial class Dinner {}

        public class Dinner_Validation
        {
            [Required]
            public string Title { get; set; }
        }

    In this case the Dinner_Validation class is public, but if you were concerned about muddying your API with such classes, it could instead have been created as a private class within Dinner. Having the validation attributes specified in their own class (with no other responsibilities) complies with the Single Responsibility Principle and makes it easy to test that the validation rules you expect are in place via these annotations/attributes. Thanks to Julie Lerman for her help with this. Right after she showed me how to do this, I realized it was also already being done in the project I was working on.
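    If you want to confirm outside of ASP.NET MVC that the metadata class is actually being honored, the following is a minimal C# sketch (my own addition, not from the article, and assuming .NET 4's System.ComponentModel.DataAnnotations): it registers the metadata provider for Dinner and then validates an instance with an empty Title. Only the Dinner and Dinner_Validation names come from the article; everything else is illustrative.

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;

        public static class DinnerValidationCheck
        {
            public static void Main()
            {
                // Tell the type-description system that Dinner's metadata lives in Dinner_Validation.
                // (MVC 2 does the equivalent of this for you at runtime.)
                TypeDescriptor.AddProvider(
                    new AssociatedMetadataTypeTypeDescriptionProvider(typeof(Dinner), typeof(Dinner_Validation)),
                    typeof(Dinner));

                var dinner = new Dinner();          // Title left null
                var results = new List<ValidationResult>();
                bool isValid = Validator.TryValidateObject(
                    dinner, new ValidationContext(dinner, null, null), results, true);

                Console.WriteLine(isValid);         // False - Title is required
                foreach (var result in results)
                    Console.WriteLine(result.ErrorMessage);
            }
        }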

    Read the article

  • Which web framework to use under Backbonejs?

    - by egidra
    For a previous project, I was using Backbone.js alongside Django, but I found out that I didn't use many features from Django. So, I am looking for a lighter framework to use underneath a Backbone.js web app. I never used Django's built-in templates. When I did, it was to set up the initial index page, but that's all. I did use the user management system that Django provided. I used models.py, but never views.py. I used urls.py to set up which template the user would hit upon visiting the site. I noticed that the two features I used most from Django were South and Tastypie, and they aren't even included with Django. In particular, django-tastypie made it easy for me to link up my frontend models to my backend models. It made it easy to JSONify my front end models and send them to Tastypie. Although I found myself overriding a lot of Tastypie's methods for GET, PUT, and POST requests, so it became useless. South made it easy to migrate new changes to the database, although I had a lot of trouble with South. Is there a framework with an easier way of handling database modifications than using South? When using South with multiple people, we had the worst time keeping our databases synced. When someone added a new table and pushed their migration to git, the other two people would spend days trying to use South's automatic migration, but it never worked. I liked how Rails had a manual way of migrating databases. Even though I used Tastypie and South a lot, I found myself not actually liking them, because I ended up overriding most Tastypie methods for each Resource, and I also had the worst trouble migrating new tables and columns with South. So, I would like a framework that makes that process easier. Part of my problem was that they are too "magical". Which framework should I use? Node.js or a lighter Python framework? Which works best with my above criteria?

    Read the article

  • How to Use Your Android Phone as a Modem; No Rooting Required

    - by Jason Fitzpatrick
    If your cellular provider's mobile hotspot/tethering plans are too pricey, skip them and tether your phone to your computer without inflating your monthly bill. Read on to see how you can score free mobile internet. We recently received a letter from a How-To Geek reader, requesting help linking their Android phone to their laptop to avoid the highway robbery their cellular provider was insisting upon:

    Dear How-To Geek, I recently found out that my cellphone company charges $30 a month to use your smartphone as a data modem. That's an outrageous price when I already pay an extra $15 a month charge just because they insist that, because I have a smartphone, I need a data plan, because I'll be using so much more data than other users. They expect me to pay what amounts to a $45 a month premium just to do some occasional surfing and email checking from the comfort of my laptop instead of the much smaller smartphone screen! Surely there is a workaround? I'm running Windows 7 on my laptop and I have an Android phone running Android OS 2.2. Help! Sincerely, No Double Dipping!

    Well, Double Dipping, this is a sentiment we can strongly relate to, as many of us on staff are in a similar situation. It's absurd that so many companies charge you to use the data connection on the phone you're already paying for. There is no difference in bandwidth usage if you stream Pandora to your phone or to your laptop, for example. Fortunately we have a solution for you. It's not free-as-in-beer, but it only costs $16 which, over the first year of use alone, will save you $344. Let's get started!

    Read the article

  • Seizing the Moment with Mobility

    - by Kathryn Perry
    A guest post by Hernan Capdevila, Vice President, Oracle Fusion Apps Mobile devices are forcing a paradigm shift in the workplace – they’re changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost effective from an IT point of view. The mobile revolution – with the idea of BYOD (bring your own device) – has added an interesting dynamic because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes and IT has to figure out how to adapt. Blurring the Lines between Work and Personal Life My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven together. In turn, enterprises will have to determine how to make employees’ work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment – capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing. The lean and agile workforce is definitely the future. This notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovating in that way are in the dinosaur age. Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared. How to Survive and Thrive in Today’s Marketplace The notion of Facebook isn’t new. We lived it pre-Internet days with America Online and Prodigy – Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner that businesses realize this from a top-down point of view the sooner that they will be able to really drive significant innovation and adapt to the marketplace. There are a small number of companies right now (I think it's closer to 20% rather than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected. By far the majority of users on Facebook and LinkedIn are mobile users – people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people – those people at the coffee shops. 
    When you're sitting at your desk on a big desktop computer, you usually have better things to do than to be on Facebook. This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses – it also brings dramatic simplification from a functional point of view and transforms our work life experience. Hernan Capdevila, Vice President, Oracle Applications Development

    Read the article

  • SQL SERVER – How to easily work with Database Diagrams

    - by Pinal Dave
    Databases are very widely used in the modern world. Regardless of its complexity, every database requires in-depth design. To practice along, please Download dbForge Studio now. The right methodology for designing a database is based on the foundations of data normalization, according to which we should first define the database's key elements – entities. Afterwards, the attributes of the entities and the relations between them are determined. There is a strong opinion that the process of database design should start with a pencil and a blank sheet of paper. This might look old-fashioned nowadays, because SQL Server provides much wider functionality for designing databases – Database Diagrams. When using SSMS for working with Database Diagrams I realized two things – on the one hand, visualization of a schema allows designing a database more efficiently; on the other, when it came to creating a big schema, some difficulties occurred when designing with SSMS. The alternatives didn't take long to appear, and dbForge Studio for SQL Server is one of them. Its functions offer more advantages for working with Database Diagrams. For example, unlike SSMS, dbForge Studio supports the ability to drag-and-drop several tables at once from the Database Explorer. This is just my opinion, but personally I find this option very useful. Another great thing is that a diagram can be saved both as a graphic file and as a special XML file, which, given an identical environment, can easily be opened on another server to continue the work. While working with dbForge Studio, it turned out that it offers a wide set of elements to operate with on the diagram. Noteworthy among these elements are containers, which allow aggregating diagram objects into thematic groups. Moreover, you can even place an image directly on the diagram if the scheme design is based on a standard template. Each of the development environments has a different approach to storing a diagram (for example, SSMS stores them server-side, whereas dbForge Studio stores them in a local file). I haven't yet found an ability to convert existing diagrams from SSMS to dbForge Studio. However, I hope Devart developers will implement this feature in one of the following releases. All in all, editing Database Diagrams through dbForge Studio was a nice experience and helped speed up common database design tasks. Download dbForge Studio now. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL

    Read the article

  • Windows Azure Recipe: Big Data

    - by Clint Edmonson
    As the name implies, what we're talking about here is the explosion of electronic data that comes from huge volumes of transactions, devices, and sensors being captured by businesses today. This data often comes in unstructured formats and/or too fast for us to effectively process in real time. Collectively, we call these the 4 big data V's: Volume, Velocity, Variety, and Variability. These qualities make this type of data best managed by NoSQL systems like Hadoop, rather than by a conventional Relational Database Management System (RDBMS). We know that there are patterns hidden inside this data that might provide competitive insight into market trends. The key is knowing when and how to leverage these "NoSQL" tools combined with traditional business systems such as SQL-based relational databases and warehouses, and other business intelligence tools.

    Drivers
    Petabyte scale data collection and storage
    Business intelligence and insight

    Solution
    The sketch below shows one of many big data solutions using Hadoop's unique, highly scalable storage and parallel processing capabilities combined with Microsoft Office's Business Intelligence Components to access the data in the cluster.

    Ingredients
    Hadoop – this big data industry heavyweight provides both large scale data storage infrastructure and a highly parallelized map-reduce processing engine to crunch through the data efficiently. Here are the key pieces of the environment:
    Pig – a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs.
    Mahout – a machine learning library with algorithms for clustering, classification and batch based collaborative filtering that are implemented on top of Apache Hadoop using the map/reduce paradigm.
    Hive – data warehouse software built on top of Apache Hadoop that facilitates querying and managing large datasets residing in distributed storage. Directly accessible to Microsoft Office and other consumers via add-ins and the Hive ODBC data driver.
    Pegasus – a peta-scale graph mining system that runs in a parallel, distributed manner on top of Hadoop and that provides algorithms for important graph mining tasks such as Degree, PageRank, Random Walk with Restart (RWR), Radius, and Connected Components.
    Sqoop – a tool designed for efficiently transferring bulk data between Apache Hadoop and structured data stores such as relational databases.
    Flume – a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data to HDFS.
    Database – directly accessible to Hadoop via the Sqoop-based Microsoft SQL Server Connector for Apache Hadoop, data can be efficiently transferred to traditional relational data stores for replication, reporting, or other needs.
    Reporting – provides easily consumable reporting when combined with a database being fed from the Hadoop environment.

    Training
    These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above.
    Hadoop Learning Resources (20+ tutorials and labs) – a huge collection of resources for learning about all aspects of Apache Hadoop-based development on Windows Azure and the Hadoop and Windows Azure ecosystems.
    SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
See my Windows Azure Resource Guide for more guidance on how to get started, including links web portals, training kits, samples, and blogs related to Windows Azure.
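    As a small illustration of the "directly accessible via the Hive ODBC data driver" point in the ingredients above, here is a hedged C# sketch of querying Hive from client code. It is not from the recipe itself: the DSN name, the weblogs table and the HiveQL statement are all assumptions, and you would substitute whatever DSN your Hive ODBC driver installation defines.

        using System;
        using System.Data.Odbc;

        class HiveQuerySample
        {
            static void Main()
            {
                // Assumed ODBC DSN configured for the cluster's Hive ODBC driver.
                const string connectionString = "DSN=HadoopHive";

                using (var connection = new OdbcConnection(connectionString))
                {
                    connection.Open();

                    // A simple HiveQL aggregation over a hypothetical web log table.
                    const string hiveQl =
                        "SELECT status, COUNT(*) AS hits FROM weblogs GROUP BY status";

                    using (var command = new OdbcCommand(hiveQl, connection))
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0}\t{1}", reader[0], reader[1]);
                        }
                    }
                }
            }
        }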

    Read the article

  • The importance of Unit Testing in BI

    - by Davide Mauri
    One of the main steps in the process we use internally to develop a BI solution is the implementation of unit tests for your BI data. As you may already know, I've created a simple (for now) tool that leverages NUnit to allow us to quickly create unit tests without having to resort to Visual Studio Database Professional: http://queryunit.codeplex.com/ Once you have a tool like this one, you can start to make sure that your BI solution (DWH and CUBE) is not only structurally sound (I mean, the cube or the report gets processed correctly), but also that the logical integrity of your business rules is enforced. For example, let's say the customer tells you that they will never create an invoice for a specific product line in 2010, since that product line is dismissed and will never be sold again. OK, we know that in theory this is true, but much of this business rule's effectiveness depends on people not making mistakes while inserting new orders/invoices and on the ERP implementing a check for this business logic. Unfortunately these last two hypotheses are not always true, so you may find yourself really having some invoices for a product line that doesn't exist anymore. Maybe this kind of situation will in future be solved using Master Data Management but, meanwhile, how can you give an idea of the data quality to your customers? How can you check that the logical integrity of the analytical data you produce is exactly what you expect? Well, unit testing of a DWH or a CUBE can be a solution. Once you have defined your test suite, by writing SQL and MDX queries that check that your data is what you expect it to be, if you use NUnit (and QueryUnit does), you can then use a tool like NUnit2Report to create a nice HTML report that can be shipped via email to give information on data quality. In addition, since NUnit produces an XML file as a result, you can also import it into a SQL Server database and then monitor the quality of data over time. I'll be speaking about this approach (and more generally about how to "engineer" a BI solution) at the next European SQL PASS: Adaptive BI Best Practices http://www.sqlpass.org/summit/eu2010/Agenda/ProgramSessions/AdaptiveBIBestPratices.aspx I'll enjoy discussing all of this with you, so see you there! And remember: "if it ain't tested it's broken!" (Sorry, I don't remember who said that in the first place :-))
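    To make the idea concrete, here is a minimal, hypothetical NUnit test in the spirit of the product-line example above. This is my own sketch, not QueryUnit's actual API: the connection string, table names and the business rule are assumptions, and the point is simply that a SQL check wrapped in an assertion becomes a repeatable data-quality test.

        using System.Data.SqlClient;
        using NUnit.Framework;

        [TestFixture]
        public class DwhLogicalIntegrityTests
        {
            // Assumed connection string to the data warehouse.
            private const string DwhConnectionString =
                @"Server=.;Database=SampleDWH;Integrated Security=true";

            [Test]
            public void DismissedProductLine_HasNoInvoicesIn2010()
            {
                const string sql =
                    @"SELECT COUNT(*)
                      FROM FactInvoice f
                      INNER JOIN DimProduct p ON f.ProductKey = p.ProductKey
                      WHERE p.ProductLine = 'Dismissed Line'
                        AND f.InvoiceYear = 2010";

                using (var connection = new SqlConnection(DwhConnectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    connection.Open();
                    int offendingRows = (int)command.ExecuteScalar();

                    // The business rule says this must never happen.
                    Assert.AreEqual(0, offendingRows,
                        "Found invoices for a product line that should no longer be sold.");
                }
            }
        }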

    Read the article

  • Internal Mutation of Persistent Data Structures

    - by Greg Ros
    To clarify, when I use the terms persistent and immutable for a data structure, I mean that: The state of the data structure remains unchanged for its lifetime. It always holds the same data, and the same operations always produce the same results. The data structure allows Add, Remove, and similar methods that return new objects of its kind, modified as instructed, that may or may not share some of the data of the original object. However, while a data structure may seem persistent to the user, it may do other things under the hood. To be sure, all data structures are, internally, at least somewhere, based on mutable storage. If I were to base a persistent vector on an array, and copy it whenever Add is invoked, it would still be persistent, as long as I modify only locally created arrays. However, sometimes you can greatly increase performance by mutating a data structure under the hood. In more, say, insidious, dangerous, and destructive ways. Ways that might leave the abstraction untouched, not letting the user know anything has changed about the data structure, but being critical at the implementation level. For example, let's say that we have a class called ArrayVector implemented using an array. Whenever you invoke Add, you get an ArrayVector built on top of a newly allocated array that has an additional item. A sequence of such updates will involve n array copies and allocations. However, let's say we implement a lazy mechanism that stores all sorts of updates -- such as Add, Set, and others -- in a queue. In this case, each update requires constant time (adding an item to a queue), and no array allocation is involved. When a user tries to get an item in the array, all the queued modifications are applied under the hood, requiring a single array allocation and copy (since we know exactly what data the final array will hold, and how big it will be). Future get operations will be performed on an empty cache, so they will take a single operation. But in order to implement this, we need to 'switch' or mutate the internal array to the new one, and empty the cache -- a very dangerous action. However, considering that in many circumstances (most updates are going to occur in sequence, after all) this can save a lot of time and memory, it might be worth it -- you will need to ensure exclusive access to the internal state, of course. This isn't a question about the efficacy of such a data structure. It's a more general question. Is it ever acceptable to mutate the internal state of a supposedly persistent or immutable object in destructive and dangerous ways? Does performance justify it? Would you still be able to call it immutable? Oh, and could you implement this sort of laziness without mutating the data structure in the specified fashion?
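    For reference, the trade-off is easier to see in code. Below is a rough C# sketch of the lazy, internally-mutating vector described above -- my own illustration rather than anything from the original question, limited to Add for brevity. Externally every instance is immutable; internally, the first read swaps in a freshly built array.

        using System;

        // Add is O(1) and allocation-free; the first read materializes the contents
        // with a single array allocation and copy. That internal mutation is never
        // visible through the public surface.
        public sealed class LazyArrayVector<T>
        {
            private T[] _items;                          // non-null once materialized
            private readonly LazyArrayVector<T> _parent; // the vector this one extends
            private readonly T _pendingItem;             // the single item appended to _parent
            private readonly int _count;

            public static readonly LazyArrayVector<T> Empty = new LazyArrayVector<T>(new T[0]);

            private LazyArrayVector(T[] items)
            {
                _items = items;
                _count = items.Length;
            }

            private LazyArrayVector(LazyArrayVector<T> parent, T pendingItem)
            {
                _parent = parent;
                _pendingItem = pendingItem;
                _count = parent._count + 1;
            }

            public int Count { get { return _count; } }

            // O(1): just records the append; no array is touched.
            public LazyArrayVector<T> Add(T item)
            {
                return new LazyArrayVector<T>(this, item);
            }

            public T this[int index]
            {
                get
                {
                    if (index < 0 || index >= _count)
                        throw new ArgumentOutOfRangeException("index");
                    Materialize();
                    return _items[index];
                }
            }

            // The "dangerous" part: flush pending appends into a fresh array and swap
            // it in. In real code this must be made thread-safe (lock or Interlocked).
            private void Materialize()
            {
                if (_items != null) return;

                T[] result = new T[_count];
                LazyArrayVector<T> node = this;
                int i = _count;
                while (node._items == null)
                {
                    result[--i] = node._pendingItem;
                    node = node._parent;
                }
                Array.Copy(node._items, result, node._count);
                _items = result; // internal mutation, invisible to callers
            }
        }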

    Read the article

  • Unit Testing Framework for XQuery

    - by Knut Vatsendvik
    This posting provides a unit testing framework for XQuery using Oracle Service Bus. It allows you to write a test case to run your XQuery transformations in an automated fashion. When the test case is run, the framework returns any differences found in the response. The complete code sample with install instructions can be downloaded from here.

    Writing a Unit Test
    You start a new Test Case by creating a Proxy Service from Workshop, which comes with Oracle Service Bus. In the General Configuration page, select the Service Type to be Messaging Service. In the Message Type Configuration page, link both the Request and Response Message Type to the TestCase element of the UnitTest.xsd schema. The TestCase element consists of the following child elements:
    The ID and optional Name elements are simply used for reference.
    The Transformation element is the XQuery resource to be executed.
    The Input elements represent the input to run the XQuery with.
    The Output element represents the expected output. These XML documents are "also" represented as an XQuery resource, where the XQuery function takes no arguments and returns the XML document. Why not pass the test data with the TestCase? Passing an XML structure inside another XML structure is not very easy, or at least not very human readable. Therefore it was chosen to represent the test data as a loadable resource in the OSB. However, you are free to take another approach here if you want.
    The XMLDiff element represents any differences found.
    A sample of the input is shown here.

    Modeling the Message Flow
    The next step is to model the message flow of the Proxy Service. In the Request Pipeline, create a stage node that loads the test case input data. For this, specify a dynamic XQuery expression that evaluates at runtime to the name of a pre-registered XQuery resource. The expression is of course set by the input data from the test case. Next, add a Run stage node. Assign the result of the XQuery that is to be run to a context variable, and define a mapping for each of the input variables added in the previous stage. Then add a Compare stage. As with the input data, load the expected output data and do a compare using the provided XMLDiff XQuery, where the first argument is the loaded output test data and the second argument is the result from the Run stage. Any differences found are written back into the test case XMLDiff element. In case of any unexpected failure while processing, add an Error Handler to the Pipeline to capture the fault. To pass back the result, add the Insert action in the Response Pipeline. A sample of the output is shown here.

    Read the article

  • OpenWorld Day 1

    - by Antony Reynolds
    A Day in the Life of an OpenWorld Attendee, Part I. Lots of people are blogging insightfully about OpenWorld, so I thought I would provide some non-insightful remarks to buck the trend! With 50,000 attendees I didn't expect to bump into too many people I knew; boy, was I wrong! I walked into the registration area and was immediately hailed by a couple of customers I had worked with a few months ago. Moving to the employee registration area in a different hall, I bumped into a colleague from the UK who was also registering. As soon as I got my badge I bumped into a friend from Ireland! So maybe OpenWorld isn't so big after all! First port of call was Larry's keynote. As always, Larry was provocative and thought provoking. His key points were announcing the Oracle cloud offering in IaaS, PaaS and SaaS, pointing out that Fusion Apps are cloud enabled, and finally announcing the 12c Database, making a big play of its new multi-tenancy features. His contention was that multi-tenancy will simplify cloud development and provide better security by providing DB-level isolation for applications and customers. The next day, Monday, was my first full day at OpenWorld. The first session I attended was on monitoring of OSB – a very interesting presentation on the benefits achieved by an Illinois-area telco, US Cellular. It was a great discussion of why they bought the SOA Management Packs and the benefits they are already seeing from their investment in terms of improved provisioning and time to market, as well as better performance insight and assistance with capacity planning. Craig Blitz provided a nice walkthrough of where Coherence has been and where it is going. Last night I attended the BOF on Managed File Transfer, where Dave Berry replayed Oracle's thoughts on providing dedicated Managed File Transfer as part of the 12c SOA release. Dave laid out the perceived requirements and solicited feedback from the audience on what, if anything, was missing. He also demoed an early version of the functionality that would simplify setting up MFT in SOA Suite and make tracking activity much easier. So much for Day 1. I also ran into scores of old friends and colleagues and had a pleasant dinner with my friend from Ireland, where I caught up on the latest news from Oracle UK. Not bad for Day 1!

    Read the article

  • Free training at Northwest Cadence

    - by Martin Hinshelwood
    Even though I have only been at Northwest Cadence for a short time, I have already done so much. What I really wanted to do was let you guys know about a bunch of FREE training that NWC offers. These sessions are at a fantastic time for the UK, as 9am PST (Seattle time) is around 5pm GMT. It's a fantastic way to finish off your Fridays, and with the lack of love for developers in the UK set to continue, I would love some of you guys to get some from the US instead. There are really two offerings. The first is something called Coffee Talks, which take you through an hour's worth of detail in a specific category.

    Coffee Talks
    These coffee talks have some superb topics, and you can get excellent interaction with the presenter as they are kind of informal.
    01/04/11 (Tuesday) 8:30AM – 9:30AM PST – Real World Business and Technical Benefits of ALM with TFS 2010 – register: 150656
    01/28/11 (Friday) 9:00AM – 10:00AM PST – The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 – register: 152810
    02/11/11 (Friday) 9:00AM – 10:00AM PST – Visual Source Safe to Team Foundation Server – register: 152844
    02/25/11 (Friday) 2:00PM – 3:00PM PST – The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 – register: 152816
    03/11/11 (Friday) 9:00AM – 10:00AM PST – Lab Manager: The Ultimate "No More No Repro" Tool – register: 152809
    03/25/11 (Friday) 9:00AM – 10:00AM PST – The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 – register: 152838
    04/08/11 (Friday) 9:00AM – 10:00AM PST – Visual Source Safe to Team Foundation Server – register: 152846
    04/22/11 (Friday) 9:00AM – 10:00AM PST – The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 – register: 152839
    05/06/11 (Friday) 2:00PM – 3:00PM PST – Real World Business and Technical Benefits of ALM with TFS 2010 – register: 150657
    05/20/11 (Friday) 9:00AM – 10:00AM PST – The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 – register: 152842
    06/03/11 (Friday) 9:00AM – 10:00AM PST – Visual Source Safe to Team Foundation Server – register: 152847
    06/17/11 (Friday) 9:00AM – 10:00AM PST – The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 – register: 152843

    ALM Training Engagement Program
    Microsoft has released a new program to bring free Visual Studio 2010 training sessions to select customers, covering Microsoft Visual Studio products and how Application Lifecycle Management (ALM) solutions can help drive greater business impact. For more details on this program, please see the process chart below. To get started, send an email to us. This training is paid for by Microsoft, and you would need to commit to 4 sessions in order to be accepted into the program. So these have more hoops to jump through to get them, but the content is much more formal and centres around adoption.

    Read the article

  • How I might think like a hacker so that I can anticipate security vulnerabilities in .NET or Java before a hacker hands me my hat [closed]

    - by Matthew Patrick Cashatt
    Premise
    I make a living developing web-based applications for all form factors (mobile, tablet, laptop, etc.). I make heavy use of SOA, and send and receive most data as JSON objects. Although most of my work is done on the .NET or Java stacks, I am also recently delving into Node.js. This new stack has got me thinking that I know reasonably well how to secure applications using the known facilities of .NET and Java, but I am woefully ignorant when it comes to best practices or, more importantly, the driving motivation behind the best practices. You see, as I gain more prominent clientele, I need to be able to assure them that their applications are secure and, in order to do that, I feel that I should learn to think like a malevolent hacker. So:
    1. What motivates a malevolent hacker? What is their prime mover? What is it that they are most after? Ultimately, the answer is money or notoriety, I am sure, but I think it would be good to understand the nuanced motivators that lead to those ends: credit card numbers, damning information, corporate espionage, shutting down a highly visible site, etc.
    2. As an extension of question #1, but more specific: what are the things most likely to be sought out by a hacker in almost any application? Passwords? Financial info? Profile data that will gain them access to other applications a user has joined? Let me be clear here. This is not judgement for or against the aforementioned motivations, because that is not the goal of this post. I simply want to know what motivates a hacker regardless of our individual judgement.
    3. What are some heuristics followed to accomplish hacker goals? Ultimately, specific processes would be great to know; however, in order to think like a hacker, I would really value your comments on the broader heuristics followed. For example: "A hacker always looks first for the low-hanging fruit such as HTTP spoofing" or "In the absence of a CAPTCHA or other deterrent, a hacker will likely run a cracking script against a login prompt and then go from there." Possibly, "A hacker will try to attack a site via Foo (browser) first, as it is known for the Bar vulnerability."
    4. What are the most common hacks employed when following the common heuristics? Specifics here: HTTP spoofing, password cracking, SQL injection, etc.

    Disclaimer
    I am not a hacker, nor am I judging hackers (heck, I even respect their ingenuity). I simply want to learn how I might think like a hacker so that I may begin to anticipate vulnerabilities before .NET or Java hands me a way to defend against them after the fact.
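    To anchor one of the items above in code: SQL injection, mentioned in question 4, is one of the easiest vulnerabilities to anticipate. The C# sketch below is my own hedged illustration (the Users table and UserName column are invented); it contrasts a query that concatenates user input into SQL text with a parameterized version that keeps the input as data.

        using System.Data.SqlClient;

        public static class LoginQueries
        {
            // Vulnerable: user input becomes part of the SQL text, so an attacker
            // can inject e.g. "' OR '1'='1" through the userName field.
            public static SqlCommand Vulnerable(SqlConnection connection, string userName)
            {
                return new SqlCommand(
                    "SELECT UserId FROM Users WHERE UserName = '" + userName + "'",
                    connection);
            }

            // Safer: the value travels as a parameter, never as SQL text.
            public static SqlCommand Parameterized(SqlConnection connection, string userName)
            {
                var command = new SqlCommand(
                    "SELECT UserId FROM Users WHERE UserName = @userName",
                    connection);
                command.Parameters.AddWithValue("@userName", userName);
                return command;
            }
        }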

    Read the article

  • What do you do when you encounter an idiotic interview question?

    - by Senthil
    I was interviewing with a "too proud of my Java skills"-looking person. He asked me, "What is your knowledge of Java IO classes... say... hash maps?" He asked me to write a piece of Java code on paper – instantiate a class and call one of the instance's methods. When I was done, he said my program wouldn't run. After 5 minutes of serious thinking, I gave up and asked why. He said I didn't write a main function, so it wouldn't run. ON PAPER. [I am too furious to continue with the stupidity...] Believe me, it wasn't a trick question or some psychic or anger-management evaluation thing. I can tell from his face, he was proud of these questions. That "developer" was supposed to "judge" the candidates. I can think of several things:
    1. Hit him with a chair (which I so desperately wanted to) and walk out.
    2. Simply walk out.
    3. Ridicule him, saying he didn't make sense.
    4. Politely let him know that he didn't make sense and go on to try and answer the questions.
    5. Don't tell him anything, but simply go on to try and answer the questions.
    So far, I have tried just 4 and 5. It hasn't helped. Unfortunately many candidates seem to do the same and remain polite, but this lets these kinds of "developers" just keep ascending the corporate ladder, gradually gaining the capacity to pi** off more and more people. How do you handle these interviewers without bursting your veins? What is the proper way to handle this, yet maintain your reputation if other potential employers were to ever get to know what happened here? Is there anything you can do, or should you even try to fix this? P.S. Let me admit that my anger has been amplified many times by the facts: He was smiling like you wouldn't believe. I got so many (20 or so) calls from that company the day before, asking me to come to the interview, that I couldn't do any work that day. I wasted a paid day off.

    Read the article

  • RealTek RTL8188CE WiFi adapter doesn't connect reliably

    - by ken.ganong
    I recently bought a new System76 laptop which came pre-installed with Ubuntu 11.10. I've been having trouble with my wireless connectivity. It seems that my connection with my wireless network keeps going in and out. It is not my network – I have seen the same problem on multiple WiFi networks and at different distances and reported link qualities.

    OS version: Ubuntu 11.10 oneiric
    kernel version: 3.0.0-14-generic

    lspci:
        lspci -nnk | grep -iA2 net
        04:00.0 Network controller [0280]: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter [10ec:8176] (rev 01)
        Subsystem: Realtek Semiconductor Co., Ltd. Device [10ec:9196]
        Kernel driver in use: rtl8192ce
        --
        05:00.0 Ethernet controller [0200]: JMicron Technology Corp. JMC250 PCI Express Gigabit Ethernet Controller [197b:0250] (rev 05)
        Subsystem: CLEVO/KAPOK Computer Device [1558:2500]
        Kernel driver in use: jme

    iwconfig:
        iwconfig
        lo        no wireless extensions.
        eth0      no wireless extensions.
        wlan0     IEEE 802.11bgn  ESSID:"peppermintpatty"
                  Mode:Managed  Frequency:2.462 GHz  Access Point: 98:FC:11:6C:E0:22
                  Bit Rate=72.2 Mb/s  Tx-Power=20 dBm
                  Retry long limit:7  RTS thr=2347 B  Fragment thr:off
                  Power Management:off
                  Link Quality=49/70  Signal level=-61 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:1103  Missed beacon:0

    lshw:
        sudo lshw -class network
        *-network
             description: Wireless interface
             product: RTL8188CE 802.11b/g/n WiFi Adapter
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:04:00.0
             logical name: wlan0
             version: 01
             serial: 00:1c:7b:a1:95:04
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=rtl8192ce driverversion=3.0.0-14-generic firmware=N/A ip=192.168.1.106 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
             resources: irq:18 ioport:e000(size=256) memory:f7d00000-f7d03fff
        *-network
             description: Ethernet interface
             product: JMC250 PCI Express Gigabit Ethernet Controller
             vendor: JMicron Technology Corp.
             physical id: 0
             bus info: pci@0000:05:00.0
             logical name: eth0
             version: 05
             serial: 00:90:f5:c0:42:b3
             size: 10Mbit/s
             capacity: 1Gbit/s
             width: 32 bits
             clock: 33MHz
             capabilities: pm pciexpress msix msi bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=jme driverversion=1.0.8 duplex=half latency=0 link=no multicast=yes port=MII speed=10Mbit/s
             resources: irq:56 memory:f7c20000-f7c23fff ioport:d100(size=128) ioport:d000(size=256) memory:f7c10000-f7c1ffff memory:f7c00000-f7c0ffff

    Any help would be appreciated. The last time I dealt with wireless issues, the most commonly suggested solution was ndiswrapper, and I seem sorely out-of-date.

    Read the article

  • What's up with LDoms: Part 9 - Direct IO

    - by Stefan Hinker
    In the last article of this series, we discussed the most general of all physical IO options available for LDoms: root domains. Now, let's have a short look at the next level of granularity: virtualizing individual PCIe slots. In the LDoms terminology, this feature is called "Direct IO" or DIO. It is very similar to root domains, but instead of reassigning ownership of a complete root complex, it only moves a single PCIe slot or endpoint device to a different domain. Let's look again at the hardware available to mars in the original configuration:

        root@sun:~# ldm ls-io
        NAME                     TYPE   BUS    DOMAIN   STATUS
        ----                     ----   ---    ------   ------
        pci_0                    BUS    pci_0  primary
        pci_1                    BUS    pci_1  primary
        pci_2                    BUS    pci_2  primary
        pci_3                    BUS    pci_3  primary
        /SYS/MB/PCIE1            PCIE   pci_0  primary  EMP
        /SYS/MB/SASHBA0          PCIE   pci_0  primary  OCC
        /SYS/MB/NET0             PCIE   pci_0  primary  OCC
        /SYS/MB/PCIE5            PCIE   pci_1  primary  EMP
        /SYS/MB/PCIE6            PCIE   pci_1  primary  EMP
        /SYS/MB/PCIE7            PCIE   pci_1  primary  EMP
        /SYS/MB/PCIE2            PCIE   pci_2  primary  EMP
        /SYS/MB/PCIE3            PCIE   pci_2  primary  OCC
        /SYS/MB/PCIE4            PCIE   pci_2  primary  EMP
        /SYS/MB/PCIE8            PCIE   pci_3  primary  EMP
        /SYS/MB/SASHBA1          PCIE   pci_3  primary  OCC
        /SYS/MB/NET2             PCIE   pci_3  primary  OCC
        /SYS/MB/NET0/IOVNET.PF0  PF     pci_0  primary
        /SYS/MB/NET0/IOVNET.PF1  PF     pci_0  primary
        /SYS/MB/NET2/IOVNET.PF0  PF     pci_3  primary
        /SYS/MB/NET2/IOVNET.PF1  PF     pci_3  primary

    All of the "PCIE" type devices are available for DIO, with a few limitations. If the device is a slot, the card in that slot must support the DIO feature; the documentation lists all such cards. Moving a slot to a different domain works just like moving a PCI root complex. Again, this is not a dynamic process and includes reboots of the affected domains. The resulting configuration is nicely shown in a diagram in the Admin Guide. There are several important things to note and consider here:
    The domain receiving the slot/endpoint device turns into an IO domain in LDoms terminology, because it now owns some physical IO hardware.
    Solaris will create nodes for this hardware under /devices. This includes entries for the virtual PCI root complex (pci_0 in the diagram) and anything between it and the actual endpoint device. It is very important to understand that all of this PCIe infrastructure is virtual only! Only the actual endpoint devices are true physical hardware.
    There is an implicit dependency between the guest owning the endpoint device and the root domain owning the real PCIe infrastructure: only if the root domain is up and running will the guest domain have access to the endpoint device.
    The root domain is still responsible for resetting and configuring the PCIe infrastructure (root complex, PCIe level configurations, error handling etc.) because it owns this part of the physical infrastructure.
    This also means that if the root domain needs to reset the PCIe root complex for any reason (typically a reboot of the root domain), it will reset and thus disrupt the operation of the endpoint device owned by the guest domain. The result in the guest is not predictable. I recommend configuring the resulting behaviour of the guest using domain dependencies as described in the Admin Guide in the chapter "Configuring Domain Dependencies".
    Please consult the Admin Guide in the section "Creating an I/O Domain by Assigning PCIe Endpoint Devices" for all the details! As you can see, there are several restrictions for this feature. It was introduced in LDoms 2.0, mainly to allow the configuration of guest domains that need access to tape devices.
Today, with the higher number of PCIe root complexes and the availability of SR-IOV, the need to use this feature is declining. I personally do not recommend using it, mainly because of the drawbacks of the dependencies on the root domain and because it can be replaced with SR-IOV (although then with similar limitations). This was a rather short entry, more for completeness. I believe that DIO can usually be replaced by SR-IOV, which is much more flexible. I will cover SR-IOV in the next section of this blog series.

    Read the article

  • Logging connection strings

    If you use some of the dynamic features of SSIS, such as package configurations or property expressions, then trying to work out where your connections are pointing can sometimes be a bit confusing. You will work it out in the end, but it can be useful to explicitly log this information so that when things go wrong you can just review the logs. You may wish to develop this idea further and encapsulate such logging into a custom task, but for now let's keep it simple and use the Script Task. The Script Task code below will raise an Information event showing the name and connection string for a connection.

        Imports System
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain
            Public Sub Main()
                Dim fireAgain As Boolean
                ' Get the connection string, we need to know the name of the connection
                Dim connectionName As String = "My OLE-DB Connection"
                Dim connectionString As String = Dts.Connections(connectionName).ConnectionString
                ' Format the message and log it via an information event
                Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                    connectionName, connectionString)
                Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)
                Dts.TaskResult = Dts.Results.Success
            End Sub
        End Class

    Building on that example, it is probably more flexible to log all connections in a package, as shown in the next example.

        Imports System
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain
            Public Sub Main()
                Dim fireAgain As Boolean
                ' Loop through all connections in the package
                For Each connection As ConnectionManager In Dts.Connections
                    ' Get the connection string and log it via an information event
                    Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                        connection.Name, connection.ConnectionString)
                    Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)
                Next
                Dts.TaskResult = Dts.Results.Success
            End Sub
        End Class

    Using the Information event makes the output readily available in the designer, for example in the Visual Studio Output window (Ctrl+Alt+O) or the package designer Execution Results tab, and it also allows you to control the logging by choosing which events to log in the normal way. Now, before somebody starts commenting that this is a security risk, I would like to highlight good practice for building connection managers. Firstly, the Password property, or any other similarly sensitive property, is always defined as write-only; and secondly, the connection string property only uses the public properties to assemble the connection string value when requested. In other words, the connection string will never contain the password. I have seen a couple of cases where this is not true, but that was just bad development by third parties; you won't find anything like that in the box from Microsoft. Whilst writing this code I found myself wishing that there was a custom log entry that you could just turn on to do this for you, but alas connection managers do not even seem to support custom events. It did however remind me of a very useful event that is often overlooked and fits rather well alongside connection string logging: the Execute SQL Task's custom ExecuteSQLExecutingQuery event. To quote the help reference: Custom Messages for Logging – provides information about the execution phases of the SQL statement. Log entries are written when the task acquires a connection to the database, when the task starts to prepare the SQL statement, and after the execution of the SQL statement is completed.
The log entry for the prepare phase includes the SQL statement that the task uses. It is the last part that is so useful: how often have you used an expression to derive a SQL statement and wanted to log it to make sure the correct SQL is being produced? You need to turn this logging on – by default no custom log events are captured – but I'll refer you to a walkthrough by Jamie on setting up the logging for ExecuteSQLExecutingQuery.

    Read the article

  • To Serve Man?

    - by Dave Convery
    Since the announcement of Windows 8 and its 'Metro' interface, the .NET community has wondered if the skills they've spent so long developing might be swept aside,in favour of HTML5 and JavaScript. Mercifully, that only seems to be true of SilverLight (as Simon Cooper points out), but it did leave me thinking how easy it is to impose a technology upon people without directly serving their needs. Case in point: QR codes. Once, probably, benign in purpose, they seem to have become a marketer's tool for determining when someone has engaged with an advert in the real world, with the same certainty as is possible online. Nobody really wants to use QR codes - it's far too much hassle. But advertisers want that data - they want to know that someone actually read their billboard / poster / cereal box, and so this flawed technology is suddenly everywhere, providing little to no value to the people who are actually meant to use it. What about 3D cinema? Profits from the film industry have been steadily increasing throughout the period that digital piracy and mass sharing has been possible, yet the industry cinema chains have forced 3D films upon a broadly uninterested audience, as a way of providing more purpose to going to a cinema, rather than watching it at home. Despite advances in digital projection, 3D cinema is scarcely more immersive to us than were William Castle's hoary old tricks of skeletons on wires and buzzing chairs were to our grandparents. iTunes - originally just a piece of software that catalogued and ripped music for you, but which is now multi-purpose bloatware; a massive, system-hogging behemoth. If it was being built for the people that used it, it would have been split into three or more separate pieces of software long ago. But as bloatware, it serves Apple primarily rather than us, stuffed with Music, Video, Various stores and phone / iPad management all bolted into one. Why? It's because, that way, you're more likely to bump into something you want to buy. You can't even buy a new laptop without finding that a significant chunk of your hard drive has been sold to 'select partners' - advertisers, suppliers of virus-busting software, and endless bloatware-flogging pop-ups that make using a new laptop without reformatting the hard drive like stepping back in time. The product you want is not the one you paid for. This is without even looking at services like Facebook and Klout, who provide a notional service with the intention of slurping up as much data about you as possible (in Klout's case, whether you create an account with them or not). What technologies do you find annoying or intrusive, and who benefits from keeping them around?

    Read the article

  • ADF Code Guidelines

    - by Chris Muir
    During Oracle Open World 2012 the ADF Product Management team announced a new OTN website, the ADF Architecture Square. While OOW represents a great opportunity to let customers know about new and exciting developments, the problem with making announcements during OOW is that customers are bombarded with so many messages that it's easy to miss something important. So in this blog post I'd like to highlight that, as part of the ADF Architecture Square website, one of the initial core offerings is a new document entitled ADF Code Guidelines. Now the title of this document should hopefully make it obvious what the document contains, but what's the purpose of the document? Why did Oracle create it? Having personally worked as an ADF consultant before joining Oracle, one thing I noted amongst ADF customers who had successfully deployed production systems was that they all approached software development in a professional and engineered way, and all of these customers had their own guideline documents on ADF best practices, conventions and recommendations. These documents, designed to be consumed by their own staff to ensure ADF applications were "built right", typically sourced their guidelines from their team's own expert learnings and from the huge amount of ADF technical collateral that is publicly available – manuals and whitepapers, presentations and blog posts, some written by Oracle and some written by independent sources. Now this is all good and well for the teams that have gone through this effort, gathering all the information and putting it into structured documents – kudos to them. But for new customers who want to break into the ADF space, who have project pressures to deliver ADF solutions without necessarily working on assembling best practices, creating such a document is understandably (regrettably?) a low priority. So in recognition of this hurdle, at Oracle we've devised the ADF Code Guidelines. This document sets out ADF code guidelines, practices and conventions for applications built using ADF Business Components and ADF Faces Rich Client (release 11g and greater). The guidelines are summarized from a number of Oracle documents and other 3rd party collateral, with the goal of giving developers and development teams a shortcut to producing their own best-practices collateral. The document is not a final product, but a living document that will be extended to cover new information as it is discovered or as the ADF framework changes. Readers are encouraged to discuss the guidelines on the ADF EMG and provide constructive feedback to me (Chris Muir) via the ADF EMG Issue Tracker. We hope you'll find the ADF Code Guidelines useful and look forward to providing updates in the near future. Image courtesy of paytai / FreeDigitalPhotos.net

    Read the article

  • Announcing the New Virtual Briefing Center

    - by Theresa Hickman
    Do you want to hear about real-world customer success stories? Or listen to Oracle Application leaders discuss the value in the latest releases of Oracle Application products? Do you want one place to download up-to-date content, including white papers, podcasts, webcasts and presentations? Did you miss the Virtual Trade Show at the beginning of 2011? If you answered yes to any of these questions, then the Virtual Briefing Center is the place to get up-to-date Oracle product information for Oracle E-Business Suite, PeopleSoft, JD Edwards, Fusion, Siebel and Hyperion across multiple product areas from financials, procurement, supply chain, CRM, Performance Management, and more. Every month we will have "Monthly Spotlights" to showcase new content. The following lists the upcoming live webcasts in July 2011: Weds. July 6, 2011 at 9:00 a.m. PST/12:00 p.m. EST: Hear about Amway’s upgrade to Oracle E-Business Suite 12.1 and how they stabilized financial modules, especially the month-end close processes. Thurs. July 14, 2011 at 9:00 a.m. PST/12:00 p.m. EST: Hear West Corporation share their PeopleSoft 9.1 upgrade, resulting in improved self-service, more robust reporting capabilities and new workflow and processes. Thurs. July 21, 2011 at 9:00 a.m. PST/12:00 p.m. EST: Learn how MFlex improved their operations, saved manpower and reduced time to close with their upgrade to JD Edwards EnterpriseOne 9.0. Thurs. July 28, 2011 at 9:00 a.m. PST/12:00 p.m. EST: IEEE discusses their upgrade to Siebel 8.1 using open web service architecture for faster SOA enablement allowing them to scale their membership capacity by 250%. If you cannot attend any of the above live events, that's OK because each of the webcasts in this series will be recorded and available on demand. And for you Financials folks who may have missed the webcasts from the Virtual Trade Show earlier this year, you can view them on demand by Visiting the Resource Library: Planning Your Successful Upgrade to Oracle E-Business Suite Financials 12.1. In this session, Bryant and Stratton College talk about their upgrade. Planning Your Successful Upgrade to PeopleSoft Financials 9.1. In this session, the University of Central Florida share their upgrade story. Fusion Financials: The New Standard for Finance. In this session, Terrance Wampler, the VP of Financial Application Strategy discusses the business value of Oracle's next generation financial applications and how customers can take advantage of Fusion Financials alongside their existing investments. What are you waiting for? Register now!

    Read the article

  • Schedule compliance while keeping up technical support and resolving issues

    - by imays
    I am the founder of a small software development company. I developed our flagship product myself and the company has grown to 14 people. One point of pride is that we have never had to take investment or loans. The core development team is 5 people: 3 seniors and 2 juniors.
    Since the first release we have received many issues from our customers. Most of them are bug reports, customization requests, usage questions and upgrade requests. Issues come in many times every day, and each one takes some of our developers' time, sometimes a little and sometimes a lot. Because our product is a software development kit (SDK), most questions can only be answered by our developers, and resolving bugs obviously requires them too. I fully understand that estimating the time needed to fix a bug is hard.
    However, our developers insist they cannot commit to a due date for any project, because they are busy handling technical support and bug fixes driven by customer issues every day. (Of course, they never work overtime.) I suggested dividing the team into two parts: one focused on development against milestones, the other doing technical support and bug fixes without fixed due dates. That way we could announce a release plan officially, and after each release the two parts would swap roles for the next milestone. They refused: "No, because it is impossible to fully share knowledge and design documents." They still say they cannot set a release date and ask me to keep due dates flexible; no milestone has a fixed date.
    Fortunately our company has no loans or investors, so we are not being choked. But I think it is a bad idea to let this situation continue. I know the story of the ant and the grasshopper. Our customers are tired of waiting indefinitely for our release dates, and companies have only limited time and money. If an open-ended due date is acceptable, would a flexible salary day be acceptable too?
    What is the root cause of our problem? All I want is to set and hit the due date of each milestone without losing our frequent technical support. I believe there must be a solution for this situation. Please answer me. Thanks in advance.
    PS. Our project management tools and practices are Trello, a Mantis-like issue tracker, shared calendar software and Scrum (cards collected into a series of small, high-completeness projects).

    Read the article

  • Oracle OpenWorld 2012: Oracle Developer Cloud, ADF-Essentials, ADF Mobile and ME!

    - by Dana Singleterry
    This year's OOW, like those in years past, will certainly be unforgettable. Lots of new announcements, which I can't mention here and may not even know about, are sure to surprise. I'll keep this short and sweet. For every session on ADF, ADF Mobile, Oracle Developer Cloud, integration with SOA Suite and more, take a look at the ADF Focus Document, which lists all the sessions ordered by day with time and location. For Mobile specifically, check out the Mobile Focus Document.
    OOW 2012 actually kicks off on Sunday, with the Moscone North demogrounds hosting Cloud. There's also the ADF EMG User Day, where you can pick up many technical tips and tricks from ADF developers and ACE Directors from around the world. A session you shouldn't miss, and a great starting point for the week if you miss Sunday's ADF EMG User Day, is Chris Tonas's keynote for developers: The Future of Development for Oracle Fusion - From Desktop to Mobile to Cloud, Monday 10:45 a.m. at Salon 8 in the Marriott. Then peruse the ADF Focus Document to fill out your day with the many sessions and labs on ADF. Don't forget that Wednesday afternoon (4:30 – 5:30) offers an ADF Meetup, an excellent opportunity to catch up with the shakers and makers of ADF, from Product Management to customers to top developers leveraging the technology to the ACE Directors themselves. Not to mention the free beer provided to help you wind down from a day of techno overload.
    Now for my schedule; I do hope to see some of you at one of these.
    Monday 10/1
    - 9:30am – 12:00pm: JDev DemoGrounds
    - 3:15pm – 4:15pm: Intro to Oracle ADF HOL; Marriott Marquis, Salon 3/4
    - 4:00pm – 6:00pm: Cloud DemoGrounds
    Tuesday 10/2
    - 9:45am – 12:00pm: JDev DemoGrounds
    - 2:00pm – 4:00pm: Cloud DemoGrounds
    - 7:30pm – 9:30pm: Team dinner at Donato Enoteca, Redwood City
    Wednesday 10/3
    - 10:15am – 11:15am: Intro to Oracle ADF HOL; Marriott Marquis, Salon 3/4
    - 1:15pm – 2:15pm: Oracle ADF - Lessons Learned in Real-World Implementations; Moscone South, Room 309. This session takes the form of a panel consisting of three customers: Herbalife, Etiya, and the Hawaii State Department of Education. During the first part of the session each customer will provide a high-level overview of their application. Following this overview I'll ask the customers questions specific to their implementations and the lessons learned through their development life cycle. Here's the session abstract: CON3535 - Oracle ADF: Lessons Learned in Real-World Implementations. This session profiles and interviews customers that have been successful in delivering compelling applications based on Oracle's Application Development Framework (Oracle ADF). The session provides an overview of Oracle ADF, and then three customers explain the business drivers for their respective applications and solutions and, if possible, provide a demonstration of the applications. Interactive questions posed to the customers after their overview will make for an exciting and dynamic format in which the customers will provide insight into real-world lessons learned in developing with Oracle ADF.
    - 3:30pm – 4:30pm: Developing Applications for Mobile iOS and Android Devices with Oracle ADF Mobile; Marriott Marquis, Salon 10A
    - 4:30pm – 6:00pm: Meet and greet ADF developers and customers; OTN Lounge
    Thursday 10/4
    - 11:15am – 12:15pm: Intro to Oracle ADF HOL; Marriott Marquis, Salon 3/4
    I'm sure our paths will cross at some point during the week and I look forward to seeing you at one of the many events. Enjoy OOW 2012!

    Read the article

  • Congratulations to the 2012 Oracle Spatial Award Winners!

    - by Mandy Ho
    I just returned from the 2012 Location Intelligence and Oracle Spatial User Conference in Washington, DC, held by Directions Magazine. It was a great conference, with presentations from across the country and around the globe, networking with Oracle Spatial users, and meetings with new customers and partners. As part of the yearly event, Oracle recognizes customers and partners for their contributions to advancing mainstream solutions using geospatial technology. This was the 8th year that Oracle has recognized innovative industry leaders. The awards were given in three categories: Education/Research, Innovator and Partnership. Here's a little on each of the award winners.
    Education and Research Award winner: Technical University of Berlin. The Institute for Geodesy and Geoinformation Science of the Technical University of Berlin (TU Berlin) was selected for its leading research work in mapping urban and regional space onto virtual 3D city and landscape models, using Oracle Spatial, including its 3D vector and GeoRaster type support, as the data management platform.
    Innovator Award winner: Istanbul Metropolitan Municipality. Istanbul is the 3rd largest metropolitan area in Europe, and one of its greatest challenges is organizing efficient public transportation for citizens and visitors; there are 15 types of transportation organized by 8 different agencies. To solve this problem, the Directorate of GIS of Istanbul Metropolitan Municipality created a multi-modal itinerary system to help citizens decide between public transport and their private cars, choosing the Oracle Spatial Network Model as the core of the system together with Java and SOAP web services.
    Partnership Award winners: CSoft Group and OSCARS. The Partnership award is given to the ISV or integrator that has demonstrated outstanding achievements in partnering with Oracle on the development side and in taking solutions to market. CSoft Group, the largest Russian integrator and consultancy provider in CAD and GIS, was selected by the Oracle Spatial product development organization for its key role in delivering geospatial solutions based on Oracle Database and Fusion Middleware to the Russian market. OSCARS provides consulting and training in France, Belgium and Luxembourg; with only 3 full-time staff, they have achieved significant success with leading-edge customer implementations leveraging the latest Oracle Spatial/MapViewer technologies, and they deliver training throughout Europe.
    Finally, we also presented two Special Recognition awards to partners that helped contribute to the Oracle Partner Network Spatial Specialization. These two partners, ThinkHuddle and OSCARS, provided insight and technical expertise from a partner perspective to help launch the new certification program for Oracle Spatial technologies.
    For more pictures of the conference and the awards, visit our Facebook page: http://www.facebook.com/OracleDatabase

    Read the article

  • Advantages and disadvantages of building a single page web application

    - by ryanzec
    I'm nearing the end of a prototyping/proof-of-concept phase for a side project I'm working on, and I'm trying to decide on some larger-scale application design decisions. The app is a project management system tailored towards the agile development process. One of the decisions I need to make is whether to go with a traditional multi-page application or a single page application. Currently my prototype is a traditional multi-page setup; however, I have been looking at backbone.js to clean up and apply some structure to my JavaScript (jQuery) code. It seems like while backbone.js can be used in multi-page applications, it shines more with single page applications. I am trying to come up with a list of advantages and disadvantages of the single page approach. So far I have:
    Advantages
    - All data has to be available via some sort of API. This is a big advantage for my use case, as I want an API to my application anyway. Right now about 60-70% of my calls to get/update data are done through a REST API. A single page application will let me better exercise and test my REST API, since the application itself will use it. It also means that as the application grows, the API grows with it, because it is what the application uses; there is no need to maintain the API as an add-on to the application. (A small sketch of the kind of endpoint this implies appears below.)
    - More responsive application. Since the data loaded after the initial page is kept to a minimum and transmitted in a compact format (like JSON), data requests should generally be faster and the server will do slightly less processing.
    Disadvantages
    - Duplication of code. For example, model code: I am going to have to create models both on the server side (PHP in this case) and on the client side in JavaScript.
    - Business logic in JavaScript. I can't give concrete examples of why this would be bad, but having business logic in JavaScript that anyone can read just doesn't feel right to me.
    - JavaScript memory leaks. Since the page never reloads, JavaScript memory leaks can happen, and I would not even know where to begin debugging them.
    There are also things that cut both ways. For example, with a single page application the data processed for each request can be much smaller, since the application asks for only the minimum data it needs for that particular request; however, it also means there could be a lot more small requests to the server. I'm not sure whether that is a good or a bad thing. What are some of the advantages and disadvantages of single page web applications that I should keep in mind when deciding which way to go for my project?
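    To make the API-first advantage above concrete, here is a minimal sketch of the kind of JSON endpoint a single page application leans on for its data. It is written as an ASP.NET MVC-style controller in C# purely as an illustration; the poster's backend is PHP, and the controller, action and DTO names are all assumptions rather than part of the actual project.

    using System.Web.Mvc;

    // Hypothetical DTO for a task in an agile project management app.
    public class TaskDto
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public string Status { get; set; }
    }

    public class TasksController : Controller
    {
        // GET /Tasks/ForSprint?sprintId=42
        // In a single page application the client renders the views, so the
        // server's job shrinks to returning compact JSON like this instead of
        // full server-rendered pages.
        public JsonResult ForSprint(int sprintId)
        {
            // Hard-coded data (sprintId is ignored) keeps the sketch
            // self-contained; a real implementation would query a repository
            // or ORM here.
            var tasks = new[]
            {
                new TaskDto { Id = 1, Title = "Write acceptance tests", Status = "In Progress" },
                new TaskDto { Id = 2, Title = "Fix login redirect", Status = "Open" }
            };

            // AllowGet is needed because ASP.NET MVC blocks JSON responses
            // to GET requests by default.
            return Json(tasks, JsonRequestBehavior.AllowGet);
        }
    }

    The same endpoints that feed the client-side views can double as the application's public API, which is exactly the first advantage listed above.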

    Read the article

  • Retrieving upcoming calendar events from a Google Calendar

    - by brian_ritchie
    Google has a great cloud-based calendar service that is part of their Gmail product. Besides using it as a personal calendar, you can use it to store events for display on your web site. The calendar is accessible through Google's GData API, for which they provide a C# SDK. Here's some code to retrieve the upcoming entries from the calendar:

    // Assumed using directives for the Google Data API .NET client library.
    using System;
    using System.Linq;
    using Google.GData.Calendar;
    using Google.GData.Client;

    public class CalendarEvent
    {
        public string Title { get; set; }
        public DateTime StartTime { get; set; }
    }

    public class CalendarHelper
    {
        public static CalendarEvent[] GetUpcomingCalendarEvents(int numberofEvents)
        {
            CalendarService service = new CalendarService("youraccount");
            EventQuery query = new EventQuery();
            query.Uri = new Uri(
                "http://www.google.com/calendar/feeds/userid/public/full");
            // FutureEvents, SortOrder and the orderby parameter return only
            // upcoming events, earliest first.
            query.FutureEvents = true;
            // SingleEvents flattens recurring events into individual entries.
            query.SingleEvents = true;
            query.SortOrder = CalendarSortOrder.ascending;
            // Limit how many entries come back.
            query.NumberToRetrieve = numberofEvents;
            query.ExtraParameters = "orderby=starttime";
            var events = service.Query(query);
            // Project the feed entries into our own DTO.
            return (from e in events.Entries
                    select new CalendarEvent()
                    {
                        StartTime = (e as EventEntry).Times[0].StartTime,
                        Title = e.Title.Text
                    }).ToArray();
        }
    }

    There are a few special "tricks" to make this work:
    - The "SingleEvents" flag will flatten out recurring events.
    - "FutureEvents", "SortOrder", and the "orderby" parameter will get the upcoming events.
    - "NumberToRetrieve" will limit the amount coming back.
    I then use LINQ to Objects to put the results into my own DTO for use by my model. It is always a good idea to place data into your own DTO for use within your MVC model. This protects the rest of your code from changes to the underlying calendar source or API.
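    As a quick follow-on, here is a minimal usage sketch showing how the helper above might be consumed from an ASP.NET MVC controller. Only CalendarEvent and CalendarHelper come from the code above; the controller and action names are assumptions for illustration.

    using System.Web.Mvc;

    public class EventsController : Controller
    {
        // Renders an "Upcoming" view whose model is the array of calendar DTOs.
        public ActionResult Upcoming()
        {
            // Pull the next five events from the public Google Calendar feed
            // and hand the DTOs straight to the view.
            CalendarEvent[] events = CalendarHelper.GetUpcomingCalendarEvents(5);
            return View(events);
        }
    }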

    Read the article

< Previous Page | 946 947 948 949 950 951 952 953 954 955 956 957  | Next Page >