Search Results

Search found 12104 results on 485 pages for 'auth basic'.


  • Should I break contract early?

    - by cbang
    About 7 months ago I made the switch from a 5 year permie role (as a support developer in C#) to a contract role. I did this because I was stagnating in my old role. The extra cash contracting is really helping too. Unfortunately my team leader has taken a dislike to me from day 1. He regularly tells me I went out contracting too early, and frequently remarks that people in their 20's have no idea what they are talking about (I am 29). I was recently given the task of configuring our reports via our in house reporting library. It works off of a database driven criteria base, with controls being loaded as needed. The configs can get fairly complex, with controls having various levels of dependency on each other. I had a short time frame to get 50 reports working, and I was told to just get the basic configuration done, after which they will be handed over to the reporting team for fine tuning, then the test team. Our updated system was deployed 2 weeks ago, and it turned out that about 15 reports had issues causing incorrect data to be returned. Upon investigation I discovered that the reporting team hadn't even looked at them, and the test team hadn't bothered to test the reports. In spite of this, my team leader has told me that it is 100% my fault. As a result, our help desk got hit hard. I worked back until 2am that night to fix the highest priority issues (on my wedding anniversary!). The next day I arrive at work at 7:45 am to continue with the fixes. I got no thanks, but keep getting repeatedly told by my manager that "I fucked up" and "this is all my fault". I told my team leader I would spend part of my weekend working to fix the remaining issues. His response was "so you fucking should! you fucked it all up!" in front of the rest of the team. I responded "No worries." and left. I spent a decent chunk of my weekend working on it. Within 2 business days of finding out about the issues, I had all the medium and high priority issues fixed. The only comments my team leader has made to me in the last 2 weeks is to tell me how I have caused a big mess, and to tell me it was all my fault. I get this multiple times a day. If I make any jokes to anyone else in the team, I get told not to be a smartass... even though the rest of the team jokes throughout the day. Apart from that, all I get is angry looks any time I am anywhere near the guy. I don't give any response other than "alright" or silence when he starts giving me a hard time. Today we found out that the pilot release for the next stage has been pushed back. My team leader has said this was caused by me (but the higher ups said no such thing). He also said I have "no understanding of the ramifications of my actions". My question is, should I break contract (I am contracted until June 30) and find another role? No one else in my team will speak up in my favour, as they are contractors too and have no interest in rocking the boat. I could complain to my team leaders boss, but I can't see that helping, as I will still be stuck in the same team. As this is my first contract, I imagine getting the next one will be hard without a reference. I can't figure out if this guy is trying to get me fired up to provoke a confrontation (the guy loves conflict), or if he is just venting anger, or what. Copping this blame day after day is really wearing me down and making me depressed... especially since I have a wife and kid to support).

    Read the article

  • Tuning Red Gate: #1 of Many

    - by Grant Fritchey
    Everyone runs into performance issues at some point. Same thing goes for Red Gate software. Some of our internal systems were running into some serious bottlenecks. It just so happens that we have this nice little SQL Server monitoring tool. What if I were to, oh, I don't know, use the monitoring tool to identify the bottlenecks, figure out the causes and then apply a fix (where possible) and then start the whole thing all over again? Just a crazy thought. OK, I was asked to. This is my first time looking through these servers, so here's how I'd go about using SQL Monitor to get a quick health check, sort of like checking the vitals on a patient. First time opening up our internal SQL Monitor instance and I was greeted with this: Oh my. Maybe I need to get our internal guys to read my blog. Anyway, I know that there are two servers where most of the load is. I'll drill down on the first. I'm selecting the server, not the instance, by clicking on the server name. That opens up the Global Overview page for the server. The information here is much more applicable to the "oh my gosh, I have a problem now" type of monitoring. But, looking at this, I am seeing something immediately. There are four (4) drives on the system. The C:\ has an average read time of 16.9ms, more than double the others. Is that a problem? Not sure, but it's something I'll look at. Its write time is higher too. I'll keep drilling down, first, to the unclosed alerts on the server. Now things get interesting. SQL Monitor has a number of different types of alerts, some related to error states, others to service status, and then some related to performance. Guess what I'm seeing a bunch of right here: Long running queries and long job durations. If you check the dates, they're all recent, within the last 24 hours. If they had just been old, uncleared alerts, I wouldn't be that concerned. But with all these, all performance related, and all in the last 24 hours, yeah, I'm concerned. At this point, I could just start responding to the Alerts. If I click on one of the Long-running query alerts, I'll get all kinds of cool data that can help me determine why the query ran long. But, I'm not in a reactive mode here yet. I'm still gathering data, trying to understand how the server works. I have the information that we're generating a lot of performance alerts; let's sock that away for the moment. Instead, I'm going to back up and look at the Global Overview for the SQL Instance. It shows all the databases on the server and their status. Then it shows a number of basic metrics about the SQL Server instance, again for that "what's happening now" view of things. Then, down at the bottom, there is the Top 10 expensive queries list: This is great stuff. And no, not because I can see the top queries for the last 5 minutes, but because I can adjust that out to 3 days. Now I can see where some serious pain is occurring over the last few days. Databases have been blocked out to protect the guilty. That's it for the moment. I have enough knowledge of what's going on in the system that I can start to try to figure out why the system is running slowly. But, I want to look a little more at some historical data, to understand better how this server is behaving. More next time.
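    For readers without SQL Monitor to hand, a comparable "most expensive queries" view can be approximated straight from SQL Server's documented DMVs. The sketch below is not part of the Red Gate tool or of this post's walkthrough; it is a minimal, illustrative T-SQL query against sys.dm_exec_query_stats and sys.dm_exec_sql_text:

        -- Minimal sketch: top 10 statements by total elapsed time since their plans were cached.
        -- DMV times are in microseconds, hence the /1000 to get milliseconds.
        SELECT TOP (10)
               qs.execution_count,
               qs.total_worker_time  / 1000 AS total_cpu_ms,
               qs.total_elapsed_time / 1000 AS total_elapsed_ms,
               qs.total_logical_reads,
               SUBSTRING(st.text,
                         (qs.statement_start_offset / 2) + 1,
                         ((CASE qs.statement_end_offset
                             WHEN -1 THEN DATALENGTH(st.text)
                             ELSE qs.statement_end_offset
                           END - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM   sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_elapsed_time DESC;

    Changing the ORDER BY to total_worker_time or total_logical_reads shifts the ranking from duration to CPU or I/O, much like changing the metric on a monitoring dashboard.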

    Read the article

  • SQL analytical mash-ups deliver real-time WOW! for big data

    - by KLaker
    One of the overlooked capabilities of SQL as an analysis engine, because we all just take it for granted, is that you can mix and match analytical features to create some amazing mash-ups. As we move into the exciting world of big data these mash-ups can really deliver those "wow, I never knew that" moments. While Java is an incredibly flexible and powerful framework for managing big data there are some significant challenges in using Java and MapReduce to drive your analysis to create these "wow" discoveries. One of these "wow" moments was demonstrated at this year's OpenWorld during Andy Mendelsohn's general keynote session. Here is the scenario - we are looking for fraudulent activities in our big data stream and in this case we are identifying potentially fraudulent activities by looking for specific patterns. We are using geospatial tagging of each transaction so we can create a real-time fraud-map for our business users. Where we start to move towards a "wow" moment is to extend this basic use of spatial and pattern matching, as shown in the above dashboard screen, to incorporate spatial analytics within the SQL pattern matching clause. This will allow us to compute the distance between transactions. Apologies for the quality of this screenshot….hopefully below you see where we have extended our SQL pattern matching clause to use the location of each transaction and to calculate the distance between transactions: This allows us to compare the time of the last transaction with the time of the current transaction and see if the distance between the two points is possible given the time frame. Obviously if I buy something in Florida from my favourite bike store (maybe a new carbon saddle for my Trek) and then 5 minutes later the system sees my credit card details being used in Arizona there is a high probability that this transaction in Arizona is actually fraudulent (I am fast on my Trek but not that fast!) and we can flag this up in real-time on our dashboard: In this post I have used the term "real-time" a couple of times and this is an important point and one of the key reasons why SQL really is the only language to use if you want to analyse big data. One of the most important questions that comes up in every big data project is: how do we do analysis? Many enlightened customers are now realising that using Java-MapReduce to deliver analysis does not result in "wow" moments. These "wow" moments only come with SQL because it offers a much richer environment, it is simpler to use and it is faster - which makes it possible to deliver real-time "Wow!". Below is a slide from Andy's session showing the results of a comparison of Java-MapReduce vs. SQL pattern matching to deliver our "wow" moment during our live demo. You can watch our analytical mash-up "Wow" demo that compares the power of 12c SQL pattern matching + spatial analytics vs. Java-MapReduce here: You can get more information about SQL Pattern Matching on our SQL Analytics home page on OTN, see here http://www.oracle.com/technetwork/database/bi-datawarehousing/sql-analytics-index-1984365.html. You can get more information about our spatial analytics here: http://www.oracle.com/technetwork/database-options/spatialandgraph/overview/index.html If you would like to watch the full Database 12c OOW presentation see here: http://medianetwork.oracle.com/video/player/2686974264001
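    The exact clause from the keynote demo is not reproduced in this excerpt, so purely as an illustration of the shape of the mash-up, here is a rough sketch of Oracle 12c pattern matching (MATCH_RECOGNIZE) with a spatial distance test inside it. The card_txns table and its card_no, txn_time (TIMESTAMP) and txn_geo (SDO_GEOMETRY) columns are hypothetical stand-ins, and the 500 km / 1 hour thresholds are arbitrary:

        -- Sketch: flag pairs of transactions on the same card that are implausibly
        -- far apart for the time elapsed between them.
        SELECT card_no, prior_time, flagged_time, km_apart
        FROM   card_txns
        MATCH_RECOGNIZE (
          PARTITION BY card_no
          ORDER BY txn_time
          MEASURES
            A.txn_time AS prior_time,
            B.txn_time AS flagged_time,
            SDO_GEOM.SDO_DISTANCE(A.txn_geo, B.txn_geo, 0.005, 'unit=KM') AS km_apart
          ONE ROW PER MATCH
          AFTER MATCH SKIP TO NEXT ROW
          PATTERN (A B)
          DEFINE
            B AS B.txn_time - A.txn_time < INTERVAL '1' HOUR
                 AND SDO_GEOM.SDO_DISTANCE(A.txn_geo, B.txn_geo, 0.005, 'unit=KM') > 500
        );

    The point of the mash-up is that the spatial function sits directly inside the DEFINE condition, so the "distance versus elapsed time" test happens as part of the pattern match itself rather than in a separate post-processing step.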

    Read the article

  • Is there an IDE that can simplify the process of creating a game matchmaking website?

    - by Scott
    Yes, I'm an old guy. And I'm well versed in "C" and have written several games which I have been selling on the web for a number of years. And now, I would like to adapt one of my games to be "online". Sounds simple. I'm sure I can use the thousands of lines of "C" code that I've already written. Right? So my initial investigation begins. First, I think I'll need a server program that lives on a dedicated server (or a VPS probably) that talks to a bunch of client applications that live on individual devices around the world. I can certainly handle that! (I think to myself). I'll break up my existing game into two pieces, a client piece that is just the game displays and buttons, and a server piece that does everything else. Piece of cake, right? But that means that the "server piece" must be executed on a remote machine somewhere and run 24/7. Can I do that? [apparently, that question is so basic, so uneducated, and so lame, that nobody has ever posed it before. Because hours of Googling does not yield an answer. Fine. I'll assume I can do that and move on.] I'll need a "game room", which to me means a website where you log in and then go to a lobby of some kind where you can set up your preferences, see if any of your friends are connected, and create or join games. Should be easy, but it's not. No way. Can I do all this with my local website builder? (which happens to be 90 Second Website Builder, a nice product, btw). It turns out, I cannot. I can start with that, but must modify each page, so I can interact with my SQL database. So I begin making each page a "PHP" page and dynamically modifying the HTML code with PHP code. I'm already starting to get a headache. Because the resulting web pages looked terrible, I began looking at using jQuery. I want to use a jQuery dialog on my website to display a list of friends and allow the user to select one to invite to the game. [google search for "how to populate a JQuery dialog from a sql database" yields nothing but more confusion.] Javascript? Java? HTML? XML? HTML5? PHP? JQuery? Flash? Sockets? Forms? CSS? Learning about each one of these, and how they interact with each other and/or depend on each other is too much for my feeble old brain. Can anyone simplify this process for me? Is there an IDE that will help me do all this without having to go back to college for a few years? Thanks, Scott

    Read the article

  • Mass Metadata Updates with Folders

    - by Kyle Hatlestad
    With the release of WebCenter Content PS5, a new folder architecture called 'Framework Folders' was introduced.  This is meant to replace the folder architecture of 'Folders_g'.  While the concepts of a folder structure and access to those folders through Desktop Integration Suite remain the same, the underlying architecture of the component has been completely rewritten.  One of the main goals of the new folders is to scale better at large volumes and remove the limitations of 1000 content items or sub-folders within a folder.  Along with the new architecture, it has a new look and a few additional features have been added.  One of those features are Query Folders.  These are folders that are populated simply by a query rather then literally putting items within the folders.  This is something that the Library has provided, but it always took an administrator to define them through the Web Layout Editor.  Now users can quickly define query folders anywhere within the standard folder hierarchy.   Within this new Framework Folders is the very handy ability to do metadata updates.  It's similar to the Propagate feature in Folders_g, but there are some key differences that make this very flexible and much more powerful. It's used within regular folders and Query Folders.  So the content you're updating doesn't all have to be in the same folder...or a folder at all.   The user decides what metadata to propagate.  In Folders_g, the system administrator controls which fields will be propagated using a single administration page.  In Framework Folders, the user decides at that time which fields they want to update. You set the value you want on the propagation screen.  In Folders_g, it used the metadata defined on the parent folder to propagate.  With Framework Folders, you supply the new metadata value when you select the fields you want to update.  It does not have to be defined on the parent folder. Because of these differences, I think the new propagate method is much more useful.  Instead of always having to rely on Archiver or a custom spreadsheet, you can quickly do mass metadata updates right within folders.   Here are the basic steps to perform propagation. First create a folder for the propagation.  You can use a regular folder, but a Query Folder will work as well. Go into the folder to get the results.   In the Edit menu, select 'Propagate'. Select the check-box next to the field to update and enter the new value  Click the Propagate button. Once complete, a dialog will appear showing it is complete What's also nice is that the process happens asynchronously in the background which means you can browse to other pages and do other things while it is still working.  You aren't stuck on the page waiting for it to complete.  In addition, you can add a configuration flag to the server to turn on a status indicator icon.  Set 'FldEnableInProcessIndicator=1' and it will show a working icon as its doing the propagation. There is a caveat when using the propagation on a Query Folder.   While a propagation on a regular folder will update all of the items within that folder, a Query Folder propagation will only update the first 50 items.  So you may need to run it multiple times depending on the size...and have the query exclude the items as they get updated. One extra note...Framework Folders is offered as the default folder architecture in the PS5 release of WebCenter Content.  
But if you are using WebCenter Content integrated with another product that makes use of folders (WebCenter Portal/Spaces, Fusion Applications, Primavera, etc), you'll need to continue using Folders_g until they are updated to use the new folders.

    Read the article

  • OTN ArchBeat Top 10 for September 2012

    - by Bob Rhubart
    The results are in... Listed below are the Top 10 most popular items shared via the OTN ArchBeat Facebook Page for the month of September 2012. The Real Architects of Los Angeles - OTN Architect Day - Oct 25 No gossip. No drama. No hair pulling. Just a full day of technical sessions and peer interaction focused on using Oracle technologies in today's cloud and SOA architectures. The event is free, but seating is limited, so register now. Thursday October 25, 2012. 8:00 a.m. – 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Oracle Fusion Middleware Security: Attaching OWSM policies to JRF-based web services clients "OWSM (Oracle Web Services Manager) is Oracle's recommended method for securing SOAP web services," says Oracle Fusion Middleware A-Team member Andre Correa. "It provides agents that encapsulate the necessary logic to interact with the underlying software stack on both service and client sides. Such agents have their behavior driven by policies. OWSM ships with a bunch of policies that are adequate to most common real world scenarios." His detailed post shows how to make it happen. Oracle 11gR2 RAC on Software Defined Network (SDN) (OpenvSwitch, Floodlight, Beacon) | Gilbert Stan "The SDN [software defined network] idea is to separate the control plane and the data plane in networking and to virtualize networking the same way we have virtualized servers," explains Gil Standen. "This is an idea whose time has come because VMs and vmotion have created all kinds of problems with how to tell networking equipment that a VM has moved and to preserve connectivity to VPN end points, preserve IP, etc." H/T to Oracle ACE Director Tim Hall for the recommendation. Process Oracle OER Events using a simple Web Service | Bob Webster Bob Webster's post "provides an example of a simple web service that processes Oracle Enterprise Repository (OER) Events. The service receives events from OER and utilizes the OER REX API to implement simple OER automations for selected event types." Understanding Oracle BI 11g Security vs Legacy Oracle BI 10g | Christian Screen "After conducting a large amount of Oracle BI 10g to Oracle BI 11g upgrades and after writing the Oracle BI 11g book,"says Oracle ACE Christian Screen, "I still continually get asked one of the most basic questions regarding security in Oracle BI 11g; How does it compare to Oracle BI 10g? The trail of questions typically goes on to what are the differences? And, how do we leverage our current Oracle BI 10g security table schema in Oracle BI 11g?" OIM-OAM-OAAM integration using TAP – Request Flow you must understand!! | Atul Kumar Atul Kumar's post addresses "key points and request flow that you must understand" when integrating three Oracle Identity Management product Oracle Identity Management, Oracle Access Management, and Oracle Adaptive Access Manager. Adding a runtime LOV for a taskflow parameter in WebCenter | Yannick Ongena Oracle ACE Yannick Ongena illustrates how to customize the parameters tab for a taskflow in WebCenter. Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET | Scott Nelson "It has been a very winding path and this blog entry is intended to share both the lessons learned and relevant approaches that led to those learnings," says Scott Nelson. "Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there." 
15 Lessons from 15 Years as a Software Architect | Ingo Rammer In this presentation from the GOTO Conference in Copenhagen, Ingo Rammer shares 15 tips regarding people, complexity and technology that he learned doing software architecture for 15 years. WebCenter Content (WCC) Trace Sections | ECM Architect ECM Architect Kevin Smith shares a detailed technical post covering WebCenter Content (WCC) Trace Sections. Thought for the Day "Eventually everything connects - people, ideas, objects. The quality of the connections is the key to quality per se." — Charles Eames (June 17, 1907 – August 21, 1978) Source: SoftwareQuotes.com

    Read the article

  • Developer Training – 6 Online Courses to Learn SQL Server, MySQL and Technology

    - by Pinal Dave
    Video courses are the next big thing and I am so happy that I have so far authored 6 different video courses with Pluralsight. Here is the list of the courses. I have listed all of my video courses over here. Note: If you click on the courses and it does not open, you need to login to Pluralsight with a valid username and password or sign up for a FREE trial. Please leave a comment with your favorite course in the comment section. Random 10 winners will get surprise gift via email. Bonus: If you list your favorite module from the course site. SQL Server Performance: Introduction to Query Tuning SQL Server performance tuning is an in-depth topic, and an art to master. A key component of overall application performance tuning is query tuning. Writing queries in an efficient manner, and making sure they execute in the most optimal way possible, is always a challenge. The basics revolve around the details of how SQL Server carries out query execution, so the optimizations explored in this course follow along the same lines. Click to View Course SQL Server Performance: Indexing Basics Indexes are the most crucial objects of the database. They are the first stop for any DBA and Developer when it is about performance tuning. There is a good side as well evil side of the indexes. To master the art of performance tuning one has to understand the fundamentals of the indexes and the best practices associated with the same. This course is for every DBA and Developer who deals with performance tuning and wants to use indexes to improve the performance of the server. Click to View Course SQL Server Questions and Answers This course is designed to help you better understand how to use SQL Server effectively. The course presents many of the common misconceptions about SQL Server, and then carefully debunks those misconceptions with clear explanations and short but compelling demos, showing you how SQL Server really works. This course is for anyone working with SQL Server databases who wants to improve her knowledge and understanding of this complex platform. Click to View Course MySQL Fundamentals MySQL is a popular choice of database for use in web applications, and is a central component of the widely used LAMP open source web application software stack. This course covers the fundamentals of MySQL, including how to install MySQL as well as written basic data retrieval and data modification queries. Click to View Course Building a Successful Blog Expressing yourself is the most common behavior of humans. Blogging has made easy to express yourself. Just like a letter or book has a structure and formula, blogging also has structure and formula. In this introductory course on blogging we will go over a few of the basics of blogging and show the way to get started with blogging immediately. If you already have a blog, this course will be even more relevant as this will discuss many of the common questions and issue you face in your blogging routine. Click to View Course Introduction to ColdFusion ColdFusion is rapid web application development platform. In this course you will learn the basics of how to use ColdFusion platform and rapidly develop web sites. The course begins with learning basics of ColdFusion Markup Language and moves to common development language practices. From there we move to frequent database operations and advanced concepts of Forms, Sessions and Cookies. The last module sums up all the concepts covered in the course with sample application. 
Click to View Course Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • Sweet and Sour Source Control

    - by Tony Davis
    Most database developers don't use Source Control. A recent anonymous poll on SQL Server Central asked its readers "Which Version Control system do you currently use to store you database scripts?" The winner, with almost 30% of the vote was...none: "We don't use source control for database scripts". In second place with almost 28% of the vote was Microsoft's VSS. VSS? Given its reputation for being buggy, unstable and lacking most of the basic features required of a proper source control system, answering VSS is really just another way of saying "I don't use Source Control". At first glance, it's a surprising thought. You wonder how database developers can work in a team and find out what changed, when the system worked before but is now broken; to work out what happened to their changes that now seem to have vanished; to roll-back a mistake quickly so that the rest of the team have a functioning build; to find instantly whether a suspect change has been deployed to production. Unfortunately, the survey didn't ask about the scale of the database development, and correlate the two questions. If there is only one database developer within a schema, who has an automated approach to regular generation of build scripts, then the need for a formal source control system is questionable. After all, a database stores far more about its metadata than a traditional compiled application. However, what is meat for a small development is poison for a team-based development. Here, we need a form of Source Control that can reconcile simultaneous changes, store the history of changes, derive versions and builds and that can cope with forks and merges. The problem comes when one borrows a solution that was designed for conventional programming. A database is not thought of as a "file", but a vast, interdependent and intricate matrix of tables, indexes, constraints, triggers, enumerations, static data and so on, all subtly interconnected. It is an awkward fit. Subversion with its support for merges and forks, and the tolerance of different work practices, can be made to work well, if used carefully. It has a standards-based architecture that allows it to be used on all platforms such as Windows Mac, and Linux. In the words of Erland Sommerskog, developers should "just do it". What's in a database is akin to a "binary file", and the developer must work only from the file. You check out the file, edit it, and save it to disk to compile it. Dependencies are validated at this point and if you've broken anything (e.g. you renamed a column and broke all the objects that reference the column), you'll find out about it right away, and you'll be forced to fix it. Nevertheless, for many this is an alien way of working with SQL Server. Subversion is the powerhouse, not the GUI. It doesn't work seamlessly with your existing IDE, and that usually means SSMS. So the question then becomes more subtle. Would developers be less reluctant to use a fully-featured source (revision) control system for a team database development if they had a turn-key, reliable system that fitted in with their existing work-practices? I'd love to hear what you think. Cheers, Tony.
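    As a small aside on the "you'll find out about it right away" point: even without a VCS-driven build, SQL Server can list the objects that reference a column before you rename it. The following is only a minimal, illustrative T-SQL sketch using the documented sys.sql_expression_dependencies view; the Customer/CustomerName names are hypothetical, and deferred name resolution means the view cannot catch every reference:

        -- Which objects reference the column we are about to rename?
        SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
               OBJECT_NAME(d.referencing_id)        AS referencing_object
        FROM   sys.sql_expression_dependencies AS d
        WHERE  d.referenced_entity_name = N'Customer'
          AND  d.referenced_minor_name  = N'CustomerName';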

    Read the article

  • Get Your Enterprise Working With Oracle On Track Communication 1.0

    - by Josh Lannin
    The On Track Development team is very pleased to announce that today On Track is available for our customers to download and evaluate.  To learn more about what On Track does start with our whitepaper and datasheet.   If you are a developer, take a look at our documentation and samples posted to our OTN page. For this first blog post, I’ll be speaking to several notable points about our product. Graceful Escalation via Conversations: On Track addresses the “Collaboration Problem” through a single guiding principle – graceful escalation – within the construct of a Conversation. In On Track, collaboration is based on a context (called a “Conversation”) that gracefully escalates in form, structure, and content, as dictated by the particular needs of a given collaboration.  Within that context, On Track provides a rich set of tools to choose from.  These tools provide for communication, coordination, content management, organization, decision making, and analysis -- all essential aspects of collaboration, but not all of them are essential all of the time.  Every collaborative interaction will evolve differently.  Some will evolve to represent work spreading over the course of years and involving a large, distributed team, while others may involve few people and not evolve at all.  Regardless, all collaborative contexts are built from the same parts, utilize the same concepts, and start the same way.  The principle of graceful escalation is that you only use the tools and structure you need; so you only incur the complexity you need. Purposeful Collaboration: Through application integration, On Track Conversations bring enterprise application users the communication and collaboration capabilities required to complete business process.  By association with specific processes or business objects conversations extend the possible interactions and broaden participation to internal or external non-application users and provide a sophisticated interaction experience, all the while enhancing the data set within the owning application.  Purposeful collaboration not only needs to happen in the context of applications, it must support a full range of real-time and long-running interactions to provide the greatest value. Multi Client, Multi Modal: This On Track 1.0 product release includes the same day availability of  multiple clients, including iPhone and iPad applications which are now available on the Apple Store, a fully capable and accessible Outlook Add-In, along with our browser web client.  With each client we have sought to leverage the strengths of each unique device- our iPhone client supports picture and voice posts, the iPad is optimized for meeting room situations and document viewing, and our Outlook add-in allows you to take emails in context and bring them into On Track.  In addition to supporting a diverse array of clients, On Track provides a unified multi modal experience support starting with basic messages moving through to integrated documents with live annotations, snapshots, application sharing, and voice. Next Generation Web Architecture: We believe On Track will help move the bar higher for what users can expect from all web applications, most notably ones that involve real-time activity.  On Track is built from the ground up with an innovative, real-time architecture that leverages the extensive push capabilities of our server.  
Whether you are receiving a new message, viewing where crowds of people are collaborating, or doing live annotation on a document with a set of people, that information comes to you immediately without refreshes or moving back and forth between pages.  We’ve leveraged this core architecture across the product experience and raised the user experience bar for this type of application.  As well these capabilities are based on open standards and protocols, and are fully extensible by anyone- enabling sophisticated integrations to be created with a wide variety of both legacy and next-generation applications. Agile Product Development: As a product team we operate using continuous feedback and modified agile development methodologies.  We have thousands of active internal Oracle users who have helped pilot our product for critical business functions, and the On Track product development team uses our product as our primary vehicle for all our collaboration.  Additionally we been working with early access customers who are adopting our technology and providing us valuable feedback - which our process has rapidly realized in improvements to our software.  On Track agility extends to our server as well, which is built to scale, and is very simple to install and configure. We are pleased to make this product announcement and encourage you to join us on Facebook or follow us on Twitter, as well as checking back here for the latest product information.

    Read the article

  • My Dog, Cross-Channel Shopping, and Fusion SCM

    - by Kathryn Perry
    A guest post by Mark Carson, Director, Oracle Fusion Supply Chain Management I was walking my dog Max in an open space behind my house. As we tromped through the tall weeds I remembered it is tick season and that I should get Max some protection. While he sniffed merrily in the tick infested brush, I started shopping in the middle of an open field on my phone. I thought it would be convenient to pick up the tick medicine from a pet store on the way home. Searching the pet store website I saw that they had the medicine, but there was no information on whether the store had any in stock and there were no options for shipping it to the store for pickup. I could return it, but not pick it up which seamed kind of odd. I really didn't feel like making calls to the local stores to find out if they had it. Since the product is popular, I tried one of the large 'everything' stores. Browsing its website I could see that it could be shipped to me, shipped to the store for free, and that the store nearest to me had it in stock. Needless to say, this store became a better option. This experience is a small example of why retailers, distributors, and manufactures have placed a high priority on enabling 'cross-channel commerce.' Shoppers like you and me expect to be able to search, compare, buy and return products on-line and over the phone using a variety of devices including PDAs, tablets and in-store kiosks. The pet store lost my business because its web channel had limited information about its stores. I have spoken with many customers and prospects about cross-channel commerce. They all realize the business implications and urgency behind cross-channel commerce but recognize there are challenges to enable it. New and existing applications must be integrated together globally through a consistent cross-channel business process. Integration is required between applications that provide the initial shopping experience and delivery applications associated with warehouses, stores, and partners. The enablement must be accomplished in a flexible way to react to fast-changing product portfolios and new acquisitions, while at the same time minimizing costs through reuse of existing systems. Meanwhile, the business must continue to grow and decision makers need to balance new capability with peak seasons. The challenges above are not unique to retail. Any customer in any industry who has multiple points for capturing orders and multiple points for fulfilling orders will face these challenges. With this in mind, we had a unique opportunity in Fusion SCM to re-think how to build a set of modular and flexible applications in the order management space that would make these challenges easier to conquer. The results are Fusion Distributed Order Orchestration and Global Order Promising. These applications can help companies, such as the pet store, enable true cross-channel commerce. The apps provide highly adaptable and flexible business processes to automate order orchestration across multiple cross-channel systems. They also show a global view of supply across warehouses, stores, and partners for real-time availability and more accurate order promising. Additional capability includes a standards-based integration framework for seamless execution and the ability to reuse existing systems for faster and lower cost implementations. OK, that was a mouthful of features and benefits. As Max waited to cross the street (he can do basic math too), I wondered if he could relate. 
He does not care about leash laws, pick-up courtesy, where he can/can't walk, what time of day it is, or even ticks. He does not care about how all these things could make walking complicated. He just wants to walk. Similarly, customers just want to shop and companies just want to make it easier to sell and deliver. You can learn more about Distributed Order Orchestration and Global Order Promising in cross-channel here.

    Read the article

  • What Counts For a DBA: Replaceable

    - by Louis Davidson
    Replaceable is what every employee in every company instinctively strives not to be. Yet, if you’re an irreplaceable DBA, meaning that the company couldn’t find someone else who could do what you do, then you’re not doing a great job. A good DBA is replaceable. I imagine some of you are already reaching for the lighter fluid, about to set the comments section ablaze, but before you destroy a perfectly good Commodore 64, read on… Everyone is replaceable, ultimately. Anyone, anywhere, in any job, could be sitting at their desk reading this, blissfully unaware that this is to be their last day at work. Morbidly, you could be about to take your terminal breath. Ideally, it will be because another company suddenly offered you a truck full of money to take a new job, forcing you to bid a regretful farewell to your current employer (with barely a “so long suckers!” left wafting in the air as you zip out of the office like the Wile E Coyote wearing two pairs of rocket skates). I’ve often wondered what it would be like to be present at the meeting where your former work colleagues discuss your potential replacement. It is perhaps only at this point, as they struggle with the question “What kind of person do we need to replace old Wile?” that you would know your true worth in their eyes. Of course, this presupposes you need replacing. I’ve known one or two people whose absence we adequately compensated with a small rock, to keep their old chair from rolling down a slight incline in the floor. On another occasion, we bought a noise-making machine that frequently attracted attention its way, with unpleasant sounds, but never contributed anything worthwhile. These things never actually happened, of course, but you take my point: don’t confuse replaceable with expendable. Likewise, if the term “trained seal” comes up, someone they can teach to follow basic instructions and push buttons in the right order, then the replacement discussion is going to be over quickly. What, however, if your colleagues decide they’ll need a super-specialist to replace you. That’s a good thing, right? Well, usually, in my experience, no it is not. It often indicates that no one really knows what you do, or how. A typical example is the “senior” DBA who built a system just before 16-bit computing became all the rage and then settled into a long career managing it. Such systems are often central to the company’s operations and the DBA very skilled at what they do, but almost impossible to replace, because the system hasn’t evolved, and runs on processes and routines that others no longer understand or recognize. The only thing you really want to hear, at your replacement discussion, is that they need someone skilled at the fundamentals and adaptable. This means that the person they need understands that their goal is to be an excellent DBA, not a specialist in whatever the-heck the company does. Someone who understands the new versions of SQL Server and can adapt the company’s systems to the way things work today, who uses industry standard methods that any other qualified DBA/programmer can understand. More importantly, this person rarely wants to get “pigeon-holed” and so documents and shares the specialized knowledge and responsibilities with their teammates. Being replaceable doesn’t mean being “dime a dozen”. 
The company might need four people to take your place due to the depth of your skills, but still, they could find those replacements and those replacements could step right in using techniques that any decent DBA should know. It is a tough question to contemplate, but take some time to think about the sort of person that your colleagues would seek to replace you. If you think they would go looking for a “super-specialist” then consider urgently how you can diversify and share your knowledge, and start documenting all the processes you know as if today were your last day, because who knows, it just might be.

    Read the article

  • Partner Webcast - Focus on Oracle Data Profiling and Data Quality 11g

    - by lukasz.romaszewski(at)oracle.com
    Partner Webcast - Focus on Oracle Data Profiling and Data Quality 11g, February 24th, 12am CET. Oracle offers an integrated suite of Data Quality software architected to discover and correct today's data quality problems and establish a platform prepared for tomorrow's yet unknown data challenges. Oracle Data Profiling provides data investigation, discovery, and profiling in support of quality, migration, integration, stewardship, and governance initiatives. It includes a broad range of features that expand upon basic profiling, including automated monitoring, business-rule validation, and trend analysis. Oracle Data Quality for Data Integrator provides cleansing, standardization, matching, address validation, location enrichment, and linking functions for global customer data and operational business data. It ensures that data adheres to established standards that are adaptable to fit each organization's specific needs. Both single- and double-byte data are processed in local languages to provide a unique and centralized view of customers, products and services. During this briefing, Data Integration Solution Specialists will be providing a technical overview and a walkthrough.
    Agenda:
    - Oracle Data Integration Strategy overview
    - A focus on Oracle Data Profiling and Oracle Data Quality for Data Integrator: Oracle Data Profiling, Oracle Data Quality for Data Integrator, live demo, Q&A
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web and Conference Call. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. To register, click here. For any questions please contact [email protected]

    Read the article

  • Shadows shimmer when camera moves

    - by Chad Layton
    I've implemented shadow maps in my simple block engine as an exercise. I'm using one directional light and using the view volume to create the shadow matrices. I'm experiencing some problems with the shadows shimmering when the camera moves and I'd like to know if it's an issue with my implementation or just an issue with basic/naive shadow mapping itself. Here's a video: http://www.youtube.com/watch?v=vyprATt5BBg&feature=youtu.be Here's the code I use to create the shadow matrices. The commented out code is my original attempt to perfectly fit the view frustum. You can also see my attempt to try clamping movement to texels in the shadow map which didn't seem to make any difference. Then I tried using a bounding sphere instead, also to no apparent effect.

        public void CreateViewProjectionTransformsToFit(Camera camera, out Matrix viewTransform, out Matrix projectionTransform, out Vector3 position)
        {
            BoundingSphere cameraViewFrustumBoundingSphere = BoundingSphere.CreateFromFrustum(camera.ViewFrustum);
            float lightNearPlaneDistance = 1.0f;
            Vector3 lookAt = cameraViewFrustumBoundingSphere.Center;
            float distanceFromLookAt = cameraViewFrustumBoundingSphere.Radius + lightNearPlaneDistance;
            Vector3 directionFromLookAt = -Direction * distanceFromLookAt;
            position = lookAt + directionFromLookAt;
            viewTransform = Matrix.CreateLookAt(position, lookAt, Vector3.Up);
            float lightFarPlaneDistance = distanceFromLookAt + cameraViewFrustumBoundingSphere.Radius;
            float diameter = cameraViewFrustumBoundingSphere.Radius * 2.0f;
            Matrix.CreateOrthographic(diameter, diameter, lightNearPlaneDistance, lightFarPlaneDistance, out projectionTransform);

            //Vector3 cameraViewFrustumCentroid = camera.ViewFrustum.GetCentroid();
            //position = cameraViewFrustumCentroid - (Direction * (camera.FarPlaneDistance - camera.NearPlaneDistance));
            //viewTransform = Matrix.CreateLookAt(position, cameraViewFrustumCentroid, Up);

            //Vector3[] cameraViewFrustumCornersWS = camera.ViewFrustum.GetCorners();
            //Vector3[] cameraViewFrustumCornersLS = new Vector3[8];
            //Vector3.Transform(cameraViewFrustumCornersWS, ref viewTransform, cameraViewFrustumCornersLS);
            //Vector3 min = cameraViewFrustumCornersLS[0];
            //Vector3 max = cameraViewFrustumCornersLS[0];
            //for (int i = 1; i < 8; i++)
            //{
            //    min = Vector3.Min(min, cameraViewFrustumCornersLS[i]);
            //    max = Vector3.Max(max, cameraViewFrustumCornersLS[i]);
            //}

            //// Clamp to nearest texel
            //float texelSize = 1.0f / Renderer.ShadowMapSize;
            //min.X -= min.X % texelSize;
            //min.Y -= min.Y % texelSize;
            //min.Z -= min.Z % texelSize;
            //max.X -= max.X % texelSize;
            //max.Y -= max.Y % texelSize;
            //max.Z -= max.Z % texelSize;

            //// We just use an orthographic projection matrix. The sun is so far away that it's rays are essentially parallel.
            //Matrix.CreateOrthographicOffCenter(min.X, max.X, min.Y, max.Y, -max.Z, -min.Z, out projectionTransform);
        }

    And here's the relevant part of the shader:

        if (CastShadows)
        {
            float4 positionLightCS = mul(float4(position, 1.0f), LightViewProj);
            float2 texCoord = clipSpaceToScreen(positionLightCS) + 0.5f / ShadowMapSize;
            float shadowMapDepth = tex2D(ShadowMapSampler, texCoord).r;
            float distanceToLight = length(LightPosition - position);
            float bias = 0.2f;
            if (shadowMapDepth < (distanceToLight - bias))
            {
                return float4(0.0f, 0.0f, 0.0f, 0.0f);
            }
        }

    The shimmer is slightly better if I drastically reduce the view volume but I think that's mostly just because the texels become smaller and it's harder to notice them flickering back and forth.
    I'd appreciate any insight; I'd very much like to understand what's going on before I try other techniques.

    Read the article

  • Swap not available on System Monitor

    - by Zaki
    I had a swap partition of 1GB (RAM 1GB, Ubuntu 12.04 lts). Now swap is not shown on System Monitor neither can I hibernate my pc (sudo pm-hibernate). blkid output: /dev/sda1: UUID="B8B4FBB1B4FB706C" TYPE="ntfs" /dev/sda2: UUID="2ea7d608-2d89-4e41-9436-d05cb3ce8871" TYPE="swap" /dev/sda3: UUID="3219d03a-67e4-454b-8ce7-a27831846e35" TYPE="ext4" /dev/sda5: LABEL="Softwares" UUID="AC1CC3301CC2F47C" TYPE="ntfs" /dev/sda6: LABEL="Education" UUID="1E103E6C103E4B53" TYPE="ntfs" /dev/sda7: LABEL="Recreation" UUID="2CC8D181C8D149AA" TYPE="ntfs" /dev/sda8: LABEL="Miscellaneous" UUID="0274D6B174D6A727" TYPE="ntfs" /etc/fstab # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda6 during installation UUID=3219d03a-67e4-454b-8ce7-a27831846e35 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=2ea7d608-2d89-4e41-9436-d05cb3ce8871 none swap sw 0 0 free -m total used free shared buffers cached Mem: 991 867 123 0 27 418 -/+ buffers/cache: 421 569 Swap: 0 0 0 cat /proc/swaps Filename Type Size Used Priority fdisk -l Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x9f369f36 Device Boot Start End Blocks Id System /dev/sda1 * 63 31471334 15735636 7 HPFS/NTFS/exFAT /dev/sda2 31471616 33470447 999416 82 Linux swap / Solaris /dev/sda3 33472512 62539775 14533632 83 Linux /dev/sda4 62541045 312592769 125025862+ f W95 Ext'd (LBA) /dev/sda5 62541108 125066024 31262458+ 7 HPFS/NTFS/exFAT /dev/sda6 125066088 187591004 31262458+ 7 HPFS/NTFS/exFAT /dev/sda7 187591068 250115984 31262458+ 7 HPFS/NTFS/exFAT /dev/sda8 250116048 312576704 31230328+ 7 HPFS/NTFS/exFAT swapon --all swapon: /dev/sda2: swapon failed: Invalid argument dmesg | grep -A 5 -B 5 -i swap [ 9.487404] EXT4-fs (sda3): ext4_orphan_cleanup: deleting unreferenced inode 131645 [ 9.487413] EXT4-fs (sda3): ext4_orphan_cleanup: deleting unreferenced inode 131330 [ 9.487418] EXT4-fs (sda3): 16 orphan inodes deleted [ 9.487420] EXT4-fs (sda3): recovery complete [ 9.578600] EXT4-fs (sda3): mounted filesystem with ordered data mode. Opts: (null) [ 20.580539] Swap area shorter than signature indicates [ 20.588363] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready [ 20.619443] udevd[330]: starting version 175 [ 20.649959] lp: driver loaded but no devices found [ 20.662972] [drm] Initialized drm 1.1.0 20060810 [ 20.675515] i915 0000:00:02.0: setting latency timer to 64 -- [ 72.288573] PM: thaw of drv:sr dev:3:0:0:0 complete after 178.143 msecs [ 72.288578] PM: thaw of drv:scsi_device dev:3:0:0:0 complete after 178.136 msecs [ 72.299677] PM: thaw of drv:scsi_device dev:2:0:0:0 complete after 189.270 msecs [ 72.309473] PM: thaw of devices complete after 202.763 msecs [ 72.309668] PM: writing image. [ 72.309670] PM: Cannot find swap device, try swapon -a. [ 72.309699] PM: Cannot get swap writer [ 72.329896] Restarting tasks ... done. 
[ 72.331777] PM: Basic memory bitmaps freed [ 72.331792] video LNXVIDEO:00: Restoring backlight state [ 72.420048] option1 ttyUSB0: option_instat_callback: error -84 [ 72.804047] option1 ttyUSB0: option_instat_callback: error -84 -- [ 145.960625] sd 7:0:0:0: Attached scsi generic sg2 type 0 [ 145.972036] sd 7:0:0:0: [sdb] Attached SCSI removable disk [ 172.430508] PPP BSD Compression module registered [ 172.455583] PPP Deflate Compression module registered [ 332.260789] type=1400 audit(1381814763.342:27): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=636 comm="cupsd" pid=636 comm="cupsd" capability=36 capname="block_suspend" [ 1913.030998] Swap area shorter than signature indicates [ 2022.530155] type=1400 audit(1381816453.610:28): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=636 comm="cupsd" pid=636 comm="cupsd" capability=36 capname="block_suspend" [ 4062.729509] Swap area shorter than signature indicates Please help. Thanks in advance. df -h Filesystem Size Used Avail Use% Mounted on /dev/sda3 14G 6.1G 7.0G 47% / udev 488M 4.0K 488M 1% /dev tmpfs 199M 868K 198M 1% /run none 5.0M 4.0K 5.0M 1% /run/lock none 496M 224K 496M 1% /run/shm

    Read the article

  • links for 2010-12-23

    - by Bob Rhubart
    Oracle VM Virtualbox 4.0 extension packs (Wim Coekaerts Blog) Wim Coekaerts describes the the new extension pack in Oracle VM Virtualbox 4.0 and how it's different from 3.2 and earlier releases. (tags: oracle otn virtualization virtualbox) Oracle Fusion Middleware Security: Creating OES SM instances on 64 bit systems "I've already opened a bug on this against OES 10gR3 CP5, but in case anyone else runs into it before it gets fixed I wanted to blog it too. (NOTE: CP5 is when official support was introduced for running OES on a 64 bit system with a 64 bit JVM)" - Chris Johnson (tags: oracle otn fusionmiddleware security) Oracle Enterprise Manager Grid Control: Shared loader directory, RAC and WebLogic Clustering "RAC is optional. Even the load balancer is optional. The feed from the agents also goes to the load balancer on a different port and it is routed to the available management server. In normal case, this is ok." - Porus Homi Havewala (tags: WebLogic oracle otn grid clustering) Magic Web Doctor: Thought Process on Upgrading WebLogic Server to 11g "Upgrading to new versions can be challenging task, but it's done for linear scalability, continuous enhanced availability, efficient manageability and automatic/dynamic infrastructure provisioning at a low cost." - Chintan Patel (tags: oracle otn weblogic upgrading) InfoQ: Using a Service Bus to Connect the Supply Chain Peter Paul van de Beek presents a case study of using a service bus in a supply channel connecting a wholesale supplier with hundreds of retailers, the overall context and challenges faced – including the integration of POS software coming from different software providers-, the solution chosen and its implementation, how it worked out and the lessons learned along the way. (tags: ping.fm) Oracle VM VirtualBox 4.0 is released! - The Fat Bloke Sings The Fat Bloke spreads the news and shares some screenshots.  (tags: oracle otn virtualization virtualbox) Leaks on Wikis: "Corporations...You're Next!" Oracle Desktop Virtualization Can Help. (Oracle's Virtualization Blog) "So what can you do to guard against these types of breaches where there is no outsider (or even insider) intrusion to detect per se, but rather someone with malicious intent is physically walking out the door with data that they are otherwise allowed to access in their daily work?" - Adam Hawley (tags: oracle otn virtualization security) OTN ArchBeat Podcast Guest Roster As the OTN ArchBeat Podcast enters its third year, it's time to acknowledge the invaluable contributions of the guests who have participated in ArchBeat programs. Check out this who's who of ArchBeat podcast panelists, with links to their respective interviews and more. (tags: oracle otn oracleace podcast archbeat) Show Notes: Architects in the Cloud (ArchBeat) Now available! Part 2 (of 4) of the ArchBeat interview with Stephen G. Bennett and Archie Reed, the authors of "Silver Clouds, Dark Linings: A Concise Guide to Cloud Computing." (tags: oracle otn podcast cloud) A Cautionary Tale About Multi-Source JNDI Configuration (Scott Nelson's Portal Productivity Ponderings) "I ran into this issue after reading that p13nDataSource and cgDataSource-NonXA should not be configured as multi-source. There were some issues changing them to use the basic JDBC connection string and when rolling back to the bad configuration the server went 'Boom.'" - Scott Nelson (tags: weblogic jdbc oracle jndi)

    Read the article

  • Create a Remote Git Repository from an Existing XCode Repository

    - by codeWithoutFear
    Introduction
    Distributed version control systems (VCSs), like Git, provide a rich set of features for managing source code. Many development tools, including XCode, provide built-in support for various VCSs. These tools offer simple configuration with limited customization to get you up and running quickly while still providing the safety net of basic version control.
    I hate losing (and re-doing) work. I have OCD when it comes to saving and versioning source code: save early, save often, and commit to the VCS often. I also hate merging code, so smaller and more frequent commits help me minimize merge time and effort as well.
    The workflow I prefer, even for personal exploratory projects, is:
    1. Make small local changes to the codebase to create an incrementally improved (and working) system.
    2. Commit these changes to the local repository. Local repositories are quick to access, function even while offline, and provide the confidence to continue making bold changes to the system. After all, I can easily recover to a recent working state.
    3. Repeat steps 1 and 2 until the codebase contains "significant" functionality and I have connectivity to the remote repository.
    4. Push the accumulated changes to the remote repository. The smaller the change set, the less likely extensive merging will be required. Smaller is better, IMHO.
    The remote repository typically has a greater degree of fault tolerance and active management dedicated to it. This can be as simple as a network share that is backed up nightly or as complex as dedicated hardware with specialized server-side processing and significant administrative monitoring.
    XCode's out-of-the-box Git integration enables steps 1 and 2 above. Time Machine backups of the local repository add an additional degree of fault tolerance, but do not support collaboration or take advantage of managed infrastructure such as on-premises or cloud-based storage.
    Creating a Remote Repository
    These are the steps I use to enable the full workflow identified above. For simplicity the "remote" repository is created on the local file system. This location could easily be on a mounted network volume.
    Create a Test Project
    My project is called HelloGit and is located at /Users/Don/Dev/HelloGit. Be sure to commit all outstanding changes; XCode always leaves a single changed file for me after the project is created and the initial commit is submitted.
    Clone the Local Repository
    We want to clone the XCode-created Git repository to the location where the remote repository will reside. In this case it will be /Users/Don/Dev/RemoteHelloGit.
    1. Open the Terminal application.
    2. Clone the local repository to the remote repository location: git clone /Users/Don/Dev/HelloGit /Users/Don/Dev/RemoteHelloGit
    Convert the Remote Repository to a Bare Repository
    The remote repository only needs to contain the Git database. It does not need a checked-out branch or local files.
    1. Go to the remote repository folder: cd /Users/Don/Dev/RemoteHelloGit
    2. Indicate the repository is "bare": git config --bool core.bare true
    3. Remove the working files, leaving the .git folder: rm -R *
    4. Remove the "origin" remote: git remote rm origin
    Configure the Local Repository
    The local repository should reference the remote repository. The remote name "origin" is used by convention to indicate the originating repository and is set automatically when a repository is cloned. We will use the "origin" name here to reflect that relationship.
    1. Go to the local repository folder: cd /Users/Don/Dev/HelloGit
    2. Add the remote: git remote add origin /Users/Don/Dev/RemoteHelloGit
    Test Connectivity
    Any changes made to the local Git repository can now be pushed to the remote repository, subject to the merging rules Git enforces.
    1. Create a new local file: date > date.txt
    2. Add the new file to the local index: git add date.txt
    3. Commit the change to the local repository: git commit -m "New file: date.txt"
    4. Push the change to the remote repository: git push origin master
    Now you can save, commit, and push/pull to your OCD heart's content! Code without fear! --Don
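    For convenience, the full sequence above can be collected into a single shell script. This is a minimal sketch: the paths are the example locations from this post and would change for a real shared or network volume, and it assumes the HelloGit project has already been created and committed in XCode.
        #!/bin/sh
        # Sketch: turn an XCode-created local Git repository into a bare "remote" repository.
        LOCAL=/Users/Don/Dev/HelloGit          # repository created by XCode
        REMOTE=/Users/Don/Dev/RemoteHelloGit   # where the bare "remote" will live

        # Clone the local repository to the remote location, then strip it down to a bare repository.
        git clone "$LOCAL" "$REMOTE"
        cd "$REMOTE"
        git config --bool core.bare true
        rm -R *                                # remove the checked-out files; the .git folder remains
        git remote rm origin

        # Point the local repository at the new remote.
        cd "$LOCAL"
        git remote add origin "$REMOTE"

        # Connectivity test: commit a trivial file and push it to the remote.
        date > date.txt
        git add date.txt
        git commit -m "New file: date.txt"
        git push origin master
    Newer versions of Git can also produce the bare copy in one step with git clone --bare, which avoids the manual core.bare and rm steps.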

    Read the article

  • June is going to be a busy month!

    - by Monica Kumar
    Who says things slow down in summer? Well, maybe for school kids, but certainly not for Oracle's Virtualization team! June is turning out to be one of the busiest months for us. We are going to be participating in a number of industry events. If you happen to be at any of these, please stop by the Oracle booth and our session/s. Let's go through a rundown of these events.
    1. 13th Annual Call Center Week, June 4-8, Caesars Palace, Las Vegas. Event website. You're now wondering...why are we at this call center show? It's really simple: Oracle's Desktop Virtualization solutions offer the best way for call centers to reliably and securely access enterprise apps using a variety of endpoint devices such as an iPad or a Sun Ray Client. Provisioning new employees becomes a breeze. We'll be jointly showcasing our solution with Oracle's CRM team. Come check us out.
    2. Gartner Infrastructure & Operations Management Summit, June 5-7, Orlando, FL. Event website. Oracle is a Premier sponsor of the Gartner IOM Summit this June 5-7, 2012 in Orlando, FL. Attendees will have the opportunity to meet with Oracle experts in a variety of sessions, including demonstrations during the showcase receptions.
    3. Cloud Expo East. Check out our website for details of our participation. Stop by booth 511 to talk to our Cloud, Virtualization and Big Data experts. In addition, we're delivering a number of sessions at Cloud Expo. The one I want to highlight is the following:
    Session: Borderless Applications in the Cloud with Oracle VM and Oracle Virtual Assembly Builder
    Abstract: As virtualization adoption progresses beyond server consolidation, it is also transforming how enterprise applications are deployed and managed in an agile environment. The traditional method of business-critical application deployment, in which administrators must contend with an array of unrelated tools and custom scripts to deploy and manage applications, OS instances, and VM instances in a fast-changing cloud computing environment, can no longer scale effectively to achieve the required response time and efficiency. Oracle VM and Oracle Virtual Assembly Builder allow applications, associated components, deployment metadata, management policies and best practices to be encapsulated into ready-to-run VMs for rapid, repeatable deployment and ease of management. Join us in this Cloud Expo session to see how Oracle VM and Oracle Virtual Assembly Builder allow you to deploy complex multi-tier applications in minutes and enable you to easily onboard existing applications to cloud environments.
    Get your free Cloud Expo pass now! We're offering complimentary VIP Gold Passes. Go to https://www.blueskyz.com/v3/Login.aspx?ClientID=19&EventID=56&sg=177, click "Continue" if you are a New User or log in if you have already created an account. Once there, you can view the Agenda or Register for Cloud Expo. To register, fill out the basic business card questions and then enter oracleVIPgold in the Priority Code field to change the price from $2,000 to $0.
    4. CiscoLive 2012, June 10-14, San Diego, CA. Event website. Our Oracle VM and Oracle Linux experts will talk about our joint collaboration with Cisco on UCS. We'll also highlight customer use cases.
    5. Gartner Infrastructure & Operations Management Summit EMEA, June 11-12, Frankfurt, Germany. Event website. Meet experts from our Virtualization and Linux team in EMEA. Stop by our booth and find out what's new in Oracle VM Server for x86 and Oracle Linux.
    June is going to be busy.

    Read the article

  • Impressions on jQuery Mobile

    - by Jeff
    For the uninitiated, jQuery Mobile is a sweet little client framework that turns regular HTML into something more touch and mobile friendly. It results in a user interface that has bigger targets, rounded corners and simple skinning capability. When it was announced that ASP.NET MVC 4 would include support for a mobile-sensitive view engine, offering up alternate views for clients that fit the mobile profile, I was all over that. Combined with jQuery Mobile, it brought a chance to do some experimentation. I blitzed through the views in POP Forums and converted them all to mobile views. (For the curious, this first pass can be found here on CodePlex, while a more recent update that uses RC 2 of jQuery Mobile v1.1.0 is running on the demo site.)
    Initially, it was kind of a mixed bag. The jQuery Mobile demo site also acts as documentation, and it's reasonably complete. I had no problem getting up a lot of basic views quickly, splitting out portions of some pages as subpages that the framework loads in quickly. The default behavior in the older version was to slide the pages in, which looked a little weird when you were using a back button. They've since changed it so the default transition is a fade in/out. Because you're dealing with Web pages, I don't think anyone is really under the illusion that you're using a native app, so I don't know that this matters.
    I've tested extensively on iPad and Windows Phone, and to be honest, I've encountered a lot of issues. On Windows Phone, there is some kind of inconsistency that prevents the proper respect for the viewport settings. The text background on text fields (for labeling) doesn't work, either. On both platforms, certain in-DOM page navigation links work only half of the time. Is this an issue of user error? Probably, but that's what's frustrating about it. Most of what you accomplish with this framework involves decorating various elements with CSS classes. There isn't any design-time safety to speak of to make sure that you're doing it right.
    I think the issues can be overcome, but there are some trade-offs to consider. The first is download size. Yes, the scripts and CSS do get cached, but that first hit will cost nearly 40k for the mobile parts. That's still a lot when you're on some crappy AT&T EDGE network, or hotel Wi-Fi. Then you have to ask yourself, do you really want your app to look like it's native to iOS? I'm not saying that's a bad thing, because consistent UI is good, but you will end up feeling a whole lot of sameness, and maybe you don't want that. I did some experimentation to try and Metro-ize the jQuery Mobile theme, and it's kind of a mixed bag. It mostly works, but you get some weirdness on badges and with buttons that I'm not crazy about. It probably just means you need to keep tweaking.
    At this point, I'm a little torn about whether or not I'll use it for POP Forums or one of the sites I'm working on. The benefits are pretty strong, but figuring out where I'm doing it wrong is proving a little time consuming.

    Read the article

  • SQL 2012 Licensing Thoughts

    - by Geoff N. Hiten
    The only thing more controversial than new Federal Tax plans is new licensing plans from Microsoft. In both cases, everyone calculates several numbers. First, will I pay more or less under this plan? Second, will my competition pay more or less than now? Third, will <insert interesting person/company here> pay more or less? Not that items 2 and 3 are meaningful, that is just how people think. Much like tax plans, the devil is in the details, so let's see how this looks. Microsoft shows it here: http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-licensing.aspx
    First up is a switch from per-socket to per-core licensing. Anyone who didn't see something like this coming should rapidly search for a new line of work, because you are not paying attention. The explosion of multi-core processors has made SQL Server a bargain. Microsoft is in business to make money, and the old per-socket model was not going to do that going forward.
    Per-core licensing also simplifies virtualization licensing. Physical Core = Virtual Core, at least for licensing. Oversubscribe your processors? That's your lookout. You still pay for what is exposed to the VM. The cool part is you can seamlessly move physical and virtual workloads around and the licenses follow. The catch is you have to have Software Assurance to make the licenses mobile. Nice touch there.
    Let's have a moment of silence for the late, unlamented, largely ignored Workgroup Edition. To quote the Microsoft FAQ: "Standard becomes our sole edition for basic database needs". Considering I haven't encountered a single instance of SQL Server Workgroup Edition in the wild, I don't think this will be all that controversial.
    As for pricing, it looks like a wash with current per-socket pricing based on four-core sockets. Interestingly, that is the minimum core count Microsoft proposes to swap when transitioning from per-socket to per-core if you are on Software Assurance. Reading the fine print shows that if you are using more, you will get more core licenses. From the licensing FAQ:
    15. How do I migrate from processor licenses to core licenses? What is the migration path? Licenses purchased with Software Assurance (SA) will upgrade to SQL Server 2012 at no additional cost. EA/EAP customers can continue buying processor licenses until your next renewal after June 30, 2012. At that time, processor licenses will be exchanged for core-based licenses sufficient to cover the cores in use by processor-licensed databases (minimum of 4 cores per processor for Standard and Enterprise, and minimum of 8 EE cores per processor for Datacenter).
    Looks like the folks who invested in the AMD 12-core chips will make out like bandits.
    Now, on to something new: SQL Server Business Intelligence Edition. Yep, finally a BI-specific SKU licensed for server+CAL configurations only. Note that Enterprise Edition still supports the complete feature set; the BI Edition is intended for smaller shops who want to use the full BI feature set but without needing Enterprise Edition scale (or costs). No, you don't get ColumnStore, Compression, or Partitioning in the BI Edition. Those are Enterprise scale features, ThankYouVeryMuch. Then again, your starting licensing costs are about one sixth of an Enterprise Edition system (based on an 8-core server).
    The only part of the message I am missing is whether the current Failover Licensing Policy will change. Do we need to fully or partially license failover servers? That is a detail I definitely want to know.

    Read the article

  • Custom Templates: Using user exits

    - by Anthony Shorten
    One of the features of Oracle Utilities Application Framework V4.1 is the ability to use templates and user exits to extend the base configuration files. The configuration files used by the product are based upon a set of templates shipped with the product. When the configureEnv utility asks for configuration settings, they are stored in a configuration file, ENVIRON.INI, which outlines the environment settings. These settings are then used by the initialSetup utility to populate the various configuration files used by the product, using templates located in the templates directory of the installation.
    Now, whilst the majority of installations at any site are non-production and the templates provided are generally adequate for that need, there are circumstances where extending the templates is needed to take advantage of more advanced facilities (such as advanced security and environment settings). The issue then becomes that if you alter the configuration files manually (directly or indirectly), you may lose all your custom settings the next time you run initialSetup. To counter this, customers can either override a template with their own custom template, or use the user exits we now provide in the templates to add fragments of configuration unique to that part of the configuration file. The latter means that the base template is still used, but additions are included to provide the extensions.
    The provision of custom templates is supported, but as soon as you use a custom template you are then responsible for reflecting any changes we put in the base template over time. Not a big task, but annoying if you have to do it for multiple copies of the product. I prefer to use user exits as they seem to represent the least-effort solution.
    The way to find the available user exits is to either read the Server Administration Guide that comes with your product or look at individual templates for lines of the form:
    #ouaf_user_exit <user exit name>
    Where <user exit name> is the name of the user exit. User exits are not always present, but they are in the places that we feel are the most likely to be changed. If a user exit does not exist, then you can always use a custom template instead.
    Now let's show an example. By default, the product generates a config.xml file to be used with Oracle WebLogic. This configuration file contains the basic settings needed to manage the product. If you want to take advantage of advanced Oracle WebLogic settings, you can use the console to make those changes and they will be reflected in the config.xml automatically. To retain those changes across invocations of initialSetup, you need to alter the template that generates the config.xml or use user exits.
    The technique is this: make the change in the console, and when you save the change, WebLogic will reflect it in the config.xml for you. Compare the old and new versions of the config.xml to determine what to add, and then find the user exit to put it in by examining the base template. For example, by default, the console is not automatically deployed (it is deployed on demand) in the base config.xml. To make the console deploy, you can add the following line to the templates/CM_config.xml.win.exit_3.include file (for Windows) or the templates/CM_config.xml.exit_3.include file (for Linux/UNIX):
    <internal-apps-deploy-on-demand-enabled>false</internal-apps-deploy-on-demand-enabled>
    Now run initialSetup to reflect the change, and if you check the splapp/config/config.xml file you will see the change applied for you.
    Now, how did I know which include file? I checked the template for config.xml and found there was a user exit at the right place. I prefixed my include filename with "CM_" to denote it as a custom user exit. This tells the upgrade tools to leave that file alone whenever you decide to upgrade (or even apply fixes). User exits can be powerful and allow customizations to be added for advanced configuration. You will see products built on the Oracle Utilities Application Framework use these exits themselves (usually prefixed with the product code). You can take advantage of them as well.
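    As a rough illustration, the whole round trip can be scripted. This is only a sketch: the $SPLEBASE layout, the exact template file names, and the bin/initialSetup.sh invocation are assumptions based on a typical Linux/UNIX environment and may differ in your installation.
        # Work from the environment's installation directory (assumed to be $SPLEBASE).
        cd $SPLEBASE

        # 1. List the templates that expose user exits, then inspect the config.xml
        #    template to pick the exit you need (exit_3 in this example).
        grep -l "#ouaf_user_exit" templates/*

        # 2. Create (or append to) the custom include file for that exit.
        #    The file name matches the Linux/UNIX example in this post.
        echo '<internal-apps-deploy-on-demand-enabled>false</internal-apps-deploy-on-demand-enabled>' \
            >> templates/CM_config.xml.exit_3.include

        # 3. Regenerate the configuration files from the templates.
        bin/initialSetup.sh

        # 4. Confirm the setting made it into the generated file.
        grep internal-apps-deploy-on-demand-enabled splapp/config/config.xml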

    Read the article

  • Building on someone else's DefaultButton Silverlight work...

    - by KyleBurns
    This week I was handed a "simple" requirement: have a search screen execute its search when the user presses the Enter key, instead of having to move hands from keyboard to mouse and click Search. That is a reasonable request that has been met for years in both Windows and Web apps. I did a quick scan for code to pilfer and found Patrick Cauldwell's blog posting "A 'Default Button' In Silverlight". This posting was a great start and I'm glad that the basic work had been done for me, but I ran into one issue: when using bound textboxes (I'm a die-hard MVVM enthusiast when it comes to Silverlight development), the search was being executed before the textbox that had focus when Enter was pressed had updated its bindings. With a little bit of reflection work, I think I have found a good generic solution that builds upon Patrick's to make it more binding-friendly. Also, I wanted to set the DefaultButton at a higher level than on each TextBox (or other control, for that matter), so mine is intended to be set somewhere such as the LayoutRoot or another high-level control and will apply to all controls beneath it in the control tree. I haven't tested this on controls that treat the Enter key specially themselves.
    The real change from Patrick's solution is that in the KeyUp event, I grab the source of the KeyUp event (in my case the textbox containing search criteria) and loop through the static fields on the element's type looking for DependencyProperty instances. When I find a DependencyProperty, I grab its value and query for bindings. Each time I find a binding, UpdateSource is called to make sure anything bound to any property of the field has the opportunity to update before the action represented by the DefaultButton is executed. Here's the code:
        using System;
        using System.Reflection;
        using System.Windows;
        using System.Windows.Automation.Peers;
        using System.Windows.Automation.Provider;
        using System.Windows.Controls;
        using System.Windows.Input;

        public class DefaultButtonService
        {
            // Attached property that associates a "default" Button with any UIElement subtree.
            public static DependencyProperty DefaultButtonProperty =
                DependencyProperty.RegisterAttached("DefaultButton", typeof(Button), typeof(DefaultButtonService),
                                                    new PropertyMetadata(null, DefaultButtonChanged));

            private static void DefaultButtonChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                var uiElement = d as UIElement;
                var button = e.NewValue as Button;
                if (uiElement != null && button != null)
                {
                    uiElement.KeyUp += (sender, arg) =>
                    {
                        if (arg.Key == Key.Enter)
                        {
                            // Push any pending binding updates from the control that raised the event
                            // before the default button's action runs.
                            var element = arg.OriginalSource as FrameworkElement;
                            if (element != null)
                            {
                                UpdateBindings(element);
                            }
                            if (button.IsEnabled)
                            {
                                button.Focus();
                                // "Click" the button through its automation peer.
                                var peer = new ButtonAutomationPeer(button);
                                var invokeProv = peer.GetPattern(PatternInterface.Invoke) as IInvokeProvider;
                                if (invokeProv != null) invokeProv.Invoke();
                                arg.Handled = true;
                            }
                        }
                    };
                }
            }

            public static Button GetDefaultButton(UIElement obj)
            {
                return (Button)obj.GetValue(DefaultButtonProperty);
            }

            public static void SetDefaultButton(DependencyObject obj, Button button)
            {
                obj.SetValue(DefaultButtonProperty, button);
            }

            public static void UpdateBindings(FrameworkElement element)
            {
                // Walk the public static fields of the element's type looking for DependencyProperty
                // instances, and force any binding on them to update its source.
                foreach (var field in element.GetType().GetFields(BindingFlags.Public | BindingFlags.Static))
                {
                    if (field.FieldType.IsAssignableFrom(typeof(DependencyProperty)))
                    {
                        try
                        {
                            var dp = field.GetValue(null) as DependencyProperty;
                            if (dp != null)
                            {
                                var binding = element.GetBindingExpression(dp);
                                if (binding != null)
                                {
                                    binding.UpdateSource();
                                }
                            }
                        }
                        catch (Exception)
                        {
                            // Swallow exceptions from properties we can't read or bind against.
                        }
                    }
                }
            }
        }

    Read the article

  • Critical Threads Optimization

    - by Rafael Vanoni
    Background
    One of the more common issues we've been seeing in the field is the growing difficulty in optimizing performance of multi-threaded applications. A good portion of this difficulty is due to the increasing complexity of modern processors that present various degrees of sharing relationships between hardware components. Take any current CMT processor and you'll find any number of CPUs sharing execution pipelines, floating point units, caches, etc. Consequently, applying the traditional recipe of one software thread for each CPU will have varying degrees of success, according to the layout of the underlying hardware.
    On top of this increasing complexity we've also seen processors with features that aim at dynamically resourcing software threads according to their utilization. Intel's Turbo Boost allows processors to increase their operating frequency if there is enough thermal headroom available and the processor isn't fully utilized. More recently, the SPARC T4 processor introduced dynamic threading, allowing each core to dynamically allocate more resources to its active CPUs. Both cases are in essence recognizing that current processors will be running a wide mix of workloads; some will be designed for throughput, others for low latency. The hardware is providing mechanisms to dynamically resource threads according to their runtime behavior.
    We're very aware of these challenges in Solaris, and have been working to provide the best out-of-box performance while providing mechanisms to further optimize applications when necessary. The Critical Threads Optimization was introduced in Solaris 10 8/11 and Solaris 11 as one such mechanism that allows customers to both address issues caused by contention over shared hardware resources and explicitly take advantage of features such as T4's dynamic threading.
    What it is
    The basic idea is to allow performance-critical threads to execute with more exclusive access to hardware resources. For example, when deploying an application that implements a producer/consumer model, it'll likely be advantageous to give the producer more exclusive access to the hardware instead of having it competing for resources with all the consumers. In the case of a T4-based system, we may want to have a producer running by itself on a single core and create one consumer for each of the remaining CPUs. With the Critical Threads Optimization we're extending the semantics of scheduling priorities (which thread should run first) to include priority over shared resources (which thread should have more "space"). Now the scheduler will not only run higher-priority threads first: it will also provide them with more exclusive access to hardware resources if they are available.
    How does it work?
    Using the previous example in Solaris 11, all you'd have to do would be to place the producer in the Fixed Priority (FX) scheduling class at priority 60, or in the Real Time (RT) class at any priority, and Solaris will try to give it more "hardware space". On both Solaris 10 8/11 and Solaris 11 this can be achieved through the existing priocntl(1,2) and priocntlset(2) interfaces. If your application already assigns these priorities to performance-critical threads, there's no additional step you need to take. One important aspect of this optimization is that it requires some level of idleness in the system, either as a result of sizing the application beforehand or through periods of transient idleness during runtime. If the system is fully committed, the scheduler will put all the available CPUs to work.
    Best practices
    If you're an application developer, we encourage you to look into assigning the right priorities for the different threads in your application. Solaris provides different scheduling classes (Time Share, Interactive, Fair Share, Fixed Priority and Real Time) that offer different policies and behaviors. It is not always simple to figure out which set of threads are critical to the performance of a workload, and it may not always be feasible to take advantage of this optimization, but we believe that this can be correctly (and safely) done during development. Overall, the out-of-box performance in Solaris should meet your workload's requirements. If you are looking into that extra bit of performance, then the Critical Threads Optimization may be what you're looking for.
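    To make the producer/consumer example above concrete, the priority assignment can be done from the shell with priocntl(1). This is a sketch: the process name and PID are hypothetical, and only the FX priority of 60 comes from the post itself.
        # Find the producer process (hypothetical name).
        pgrep -x producer
        # -> 12345, for example

        # Place the running process in the Fixed Priority (FX) class at priority 60,
        # which marks it as performance-critical for the scheduler.
        priocntl -s -c FX -m 60 -p 60 -i pid 12345

        # Or launch it directly in the FX class at priority 60.
        priocntl -e -c FX -m 60 -p 60 ./producer

        # Verify the scheduling class and priority.
        priocntl -d -i pid 12345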

    Read the article

  • Controlling server configurations with IPS

    - by barts
    I recently received a customer question regarding how they could best control which packages and which versions are used on their production Solaris 11 servers. They had considered pointing each server at its own software repository, a common initial approach. A simpler method leverages one of the dependency mechanisms we introduced with Solaris 11, but it is not immediately obvious to most people.
    Typically, most internal IT departments qualify particular versions for production use. What this customer wanted to do was ensure that their operations staff only installed internally qualified versions of Solaris on their servers. The easiest way of doing this is to leverage the 'incorporate' type of dependency in a small package defined for each server type. From the reference "Packaging and Delivering Software With the Image Packaging System in Oracle® Solaris 11.1":
    The incorporate dependency specifies that if the given package is installed, it must be at the given version, to the given version accuracy. For example, if the dependent FMRI has a version of 1.4.3, then no version less than 1.4.3 or greater than or equal to 1.4.4 satisfies the dependency. Version 1.4.3.7 does satisfy this example dependency. The common way to use incorporate dependencies is to put many of them in the same package to define a surface in the package version space that is compatible. Packages that contain such sets of incorporate dependencies are often called incorporations. Incorporations are typically used to define sets of software packages that are built together and are not separately versioned. The incorporate dependency is heavily used in Oracle Solaris to ensure that compatible versions of software are installed together. An example incorporate dependency is:
    depend type=incorporate fmri=pkg:/driver/network/ethernet/[email protected],5.11-0.175.0.0.0.2.1
    So, to make sure only qualified versions are installed on a server, create a package that will be installed on the machines to be controlled. This package will contain an incorporate dependency on the "entire" package, which controls the various components used to build Solaris. Every time a new version of Solaris has been qualified for production use, create a new version of this package specifying the new version of "entire" that was qualified. Once this new control package is available in the repositories configured on the production server, the pkg update command will update that system to the specified version. Unless a new version of the control package is made available, pkg update will report that no updates are available, since no version of the control package can be installed that satisfies the incorporate constraint.
    Note that, if desired, the same package can be used to specify which packages must be present on the system by adding either "require" or "group" dependencies; the latter permits removal of some of the packages, the former does not. More details on this can be found in either the section 5 pkg man page or the previously mentioned reference document.
    This technique of using package dependencies to constrain system configuration leverages the SAT solver which is at the heart of IPS, and is basic to how we package Solaris itself.
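    To make the approach concrete, here is a rough sketch of such a control package. The package name, publisher, repository path, and the version of "entire" are all made up for illustration; in practice you would use the version string your organization actually qualified, and bump the control package's own version each time a new build is approved.
        # production-baseline.p5m -- a metadata-only control package (hypothetical names and versions)
        set name=pkg.fmri value=pkg://mysite/config/[email protected],5.11-1.0
        set name=pkg.summary value="Pins production servers to the qualified Solaris build"
        depend type=incorporate fmri=pkg:/[email protected],5.11-0.175.0.0.0.2.0

        # Publish the manifest to the internal repository configured on the production servers
        # (repository creation and publisher setup not shown).
        pkgsend publish -s /export/repo/mysite production-baseline.p5m

        # On each production server, install the control package once...
        pkg install config/production-baseline

        # ...after which "pkg update" can only move the system to versions of "entire"
        # permitted by the newest available control package.
        pkg update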

    Read the article

  • ArchBeat Link-o-Rama Top 10 for December 9-15, 2012

    - by Bob Rhubart
    You click, we listen. The following list reflects the Top 10 most popular items posted on the OTN ArchBeat Facebook page for the week of December 9-15, 2012.
    1. DevOps Basics II: What is Listening on Open Ports and Files – WebLogic Essentials | Dr. Frank Munz
    "Can you easily find out which WebLogic servers are listening to which port numbers and addresses?" asks Dr. Frank Munz. The good doctor has an answer—and a tech tip.
    2. Using OBIEE against Transactional Schemas Part 4: Complex Dimensions | Stewart Bryson
    "Another important entity for reporting in the Customer Tracking application is the Contact entity," says Stewart Bryson. "At first glance, it might seem that we should simply build another dimension called Dim – Contact, and use analyses to combine our Customer and Contact dimensions along with our Activity fact table to analyze Customer and Contact behavior."
    3. SOA 11g Technology Adapters – ECID Propagation | Greg Mally
    "Many SOA Suite 11g deployments include the use of the technology adapters for various activities including integration with FTP, database, and files to name a few," says Oracle Fusion Middleware A-Team member Greg Mally. "Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective." Greg's post focuses on technical tips for dealing with one of these challenges.
    4. Podcast: DevOps and Continuous Integration
    In Part 1 of a 3-part program, panelists Tim Hall (Senior Director of product management for Oracle Enterprise Repository and Oracle's Application Integration Architecture), Robert Wunderlich (Principal Product Manager for Oracle's Application Integration Architecture Foundation Pack) and Peter Belknap (Director of product management for Oracle SOA Integration) discuss why DevOps matters and how it changes development methodologies and organizational structure.
    5. Good To Know - Conflicting View Objects and Shared Entity | Andrejus Baranovskis
    Oracle ACE Director Andrejus Baranovskis shares his thoughts -- and a sample application -- dealing with an "interesting ADF behavior" encountered over the weekend.
    6. Cloud Deployment Models | B. R. Clouse
    Looking out for the cloud newbies... "As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective," says B. R. Clouse. "This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes."
    7. Service governance morphs into cloud API management | David Linthicum
    "When building and using clouds, the ability to manage APIs or services is the single most important item you can provide to ensure the success of the project," says David Linthicum. "But most organizations driving a cloud project for the first time have no experience handling a service-based architecture and don't see the need for API management until after deployment. By then, it's too late."
    8. Oracle Fusion Middleware Security: Password Policy in OAM 11g R2 | Rob Otto
    Rob Otto continues the Oracle Fusion Middleware A-Team "Oracle Access Manager Academy" series with a detailed look at OAM's ability to support "a subset of password management processes without the need to use Oracle Identity Manager and LDAP Sync."
    9. Understanding the JSF Lifecycle and ADF Optimized Lifecycle | Steven Davelaar
    Could you call that a surprise ending? Oracle WebCenter & ADF Architecture Team (A-Team) member Steven Davelaar learned a lot more than he expected while creating a UKOUG presentation entitled "What you need to know about JSF to be succesful with ADF."
    10. Expanding on requestaudit - Tracing who is doing what...and for how long | Kyle Hatlestad
    "One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing," says Oracle Fusion Middleware A-Team architect Kyle Hatlestad. Get up close and technical in his post.
    Thought for the Day
    "There is no code so big, twisted, or complex that maintenance can't make it worse." — Gerald Weinberg
    Source: SoftwareQuotes.com

    Read the article

  • Acceptance tests done first...how can this be accomplished?

    - by Crazy Eddie
    The basic gist of most Agile methods is that a feature is not "done" until it's been developed, tested, and in many cases released. This is supposed to happen in quick turnaround chunks of time such as "Sprints" in the Scrum process. A common part of Agile is also TDD, which states that tests are done first.
    My team works on a GUI program that does a lot of specific drawing and such. In order to provide tests, the testing team needs to be able to work with something that at least attempts to perform the things they are trying to test. We've found no way around this problem. I can very much see where they are coming from, because if I was trying to write software that targeted some basically mysterious interface I'd have a very hard time. Although we have behavior fairly well specified, the exact process of interacting with various UI elements when it comes to automation seems to be too unique to a feature to allow testers to write automated scripts to drive something that does not exist. Even if we could, a lot of things end up turning up later as having been missing from the specification.
    One thing we considered doing was having the testers write test "scripts" that are more like a set of steps that must be performed, as described from a use-case perspective, so that they can be "automated" by a human being. These can then be performed by the developer(s) writing the feature and/or verified by someone else. When the testers later get an opportunity, they automate the "script", mainly for regression purposes. This didn't end up catching on in the team, though.
    The testing part of the team is actually falling behind us by quite a margin. This is one reason why the apparently extra time of developing a "script" for a human being to perform just did not happen: they're under a crunch to keep up with us developers. If we waited for them, we'd get nothing done. It's not their fault, really; they're a bottleneck, but they're doing what they should be and working as fast as possible. The process itself seems to be set up against them. Very often we end up having to go back a month or more in what we've done to fix bugs that the testers have finally gotten around to checking. It's an ugly truth that I'd like to do something about.
    So what do other teams do to solve this fail cascade? How can we get testers ahead of us, and how can we make it so that there's actually time for them to write tests for the features we do in a sprint without making us sit and twiddle our thumbs in the meantime? As it's currently going, getting a feature "done", using agile definitions, would mean having developers work for one week, then testers work the second week, with developers hopefully able to fix all the bugs that come up in the last couple of days. That's just not going to happen, even if I agreed it was a reasonable solution. I need better ideas...

    Read the article
