Search Results

Search found 23603 results on 945 pages for 'non technical manager'.


  • Limiting DOPs – Who rules over whom?

    - by jean-pierre.dijcks
    I've gotten a couple of questions from Dan Morgan and figured I'd start to answer them in this way. Dan is running Database Resource Manager on a big system, and he is trying to make sure the system doesn't go crazy (remember, end users are never, ever crazy!) on very high DOPs.

    Q: How do I control statements with very high DOPs driven from user hints in queries?

    A: The best way to do this is to work with DBRM and impose limits on consumer groups. The Max DOP setting in DBRM allows you to override the hint. Now let's go into some more detail here. Assume my object has PARALLEL 64 as its setting (for simplicity we assume there is a single object, and do remember that we always pick the highest DOP when in doubt and when conflicting DOPs are available in a query). Assume that the query that selects something cool from that table lives in a consumer group with a max DOP of 32. Assume no goofy things (like running out of parallel_max_servers) are happening. A query selecting from this table will run at DOP 32 because DBRM caps the DOP. As of 11.2.0.1 we also use the DBRM cap to create the original plan (at compile time) and not just enforce the cap at runtime. Now, my user is smart and writes a query with a parallel hint requesting DOP 128. This query is still capped by DBRM, which overrules the hint in the statement. The statement, despite the hint, runs at DOP 32. Note that in the hinted scenario we do compile the statement with DOP 128 (the optimizer obeys the hint). This is another reason to use table decoration rather than hints.

    Q: What happens if I set parallel_max_servers higher than processes (i.e. the maximum number of processes allowed to run on my machine)?

    A: Processes rules. It is important to understand that processes are fixed at startup time. If you increase parallel_max_servers above the number of processes in the processes parameter, you should get a warning in the alert log stating it cannot take effect. As a follow-up, a hinted query requesting more parallel processes than either parallel_max_servers or processes will not be able to acquire the requested number; parallel_max_servers will prevent this. And since parallel_max_servers should be lower than processes, you can never go over either.
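    For reference, here is a minimal PL/SQL sketch of the kind of DBRM setup described above; the plan and consumer group names are illustrative, and in a real system you would also map users or sessions to the group:

        -- Sketch only: cap a consumer group at DOP 32 so it overrides higher hints
        BEGIN
          DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
          DBMS_RESOURCE_MANAGER.CREATE_PLAN(
            plan    => 'DAYTIME_PLAN',           -- hypothetical plan name
            comment => 'Caps parallelism for ad-hoc users');
          DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
            consumer_group => 'ADHOC_GROUP',     -- hypothetical group name
            comment        => 'Ad-hoc query users');
          DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
            plan                     => 'DAYTIME_PLAN',
            group_or_subplan         => 'ADHOC_GROUP',
            comment                  => 'Max DOP of 32 for this group',
            parallel_degree_limit_p1 => 32);
          DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
            plan             => 'DAYTIME_PLAN',
            group_or_subplan => 'OTHER_GROUPS',  -- every plan needs an OTHER_GROUPS directive
            comment          => 'Catch-all directive');
          DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
          DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
        END;
        /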

    Read the article

  • Accessing JMX for Oracle WebLogic 11g

    - by Anthony Shorten
    In Oracle Utilities Application Framework V4, we use the latest Oracle WebLogic release (11g). The instructions below illustrate a way of allowing a console like jconsole to remotely monitor and manage Oracle WebLogic using the JMX MBeans. Typically, management of Oracle WebLogic is done from Oracle Enterprise Manager or the Oracle WebLogic console application, but you can also use JMX. To access the JMX capability for Oracle WebLogic 11g, for an Oracle Utilities Application Framework based product, using a JMX console (such as jconsole), the following process needs to be performed:

    1. Enable the JMX Management Server in the Oracle WebLogic console at the splapp - Configuration - General - Advanced Settings option. Enable both Compatibility Mbean Server Enabled and Management EJB Enabled (this enables the legacy and new JMX interfaces). Save the changes. This change will require a restart.

    2. In the startup of the Oracle WebLogic server, in $SPLSYSTEMLOGS/myserver.log (or %SPLSYSTEMLOGS%\myserver.log on Windows), you will see the BEA-149512 message indicating the MBean servers have been started. The message will indicate the JMX URL that can be used to access the JMX MBeans. The URL is in the format service:jmx:iiop://host:port/jndi/mbeanserver, where:
       host - Oracle WebLogic host name
       port - Oracle WebLogic port number
       mbeanserver - MBean server to access. Valid values: weblogic.management.mbeanservers.runtime, weblogic.management.mbeanservers.edit, weblogic.management.mbeanservers.domainruntime
       For illustrative purposes we will use the domainruntime MBean server.

    3. Ensure that you execute the splenviron[.sh] utility to set the appropriate environment variables for the desired environment.

    4. Execute the following jconsole command to initiate the connection to the JMX MBean server.
       Windows:
       jconsole -J-Djava.class.path=%JAVA_HOME%\lib\jconsole.jar;%WL_HOME%\server\lib\wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote
       Linux/UNIX (note the ":" classpath separator on these platforms):
       jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$WL_HOME/server/lib/wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote

    5. You will see a New Connection dialog. Specify the URL from the previous steps in the Remote Process field (i.e. service:jmx:iiop...). The credentials are the credentials specified for the Oracle WebLogic console.

    You are now able to view the JMX classes available. Refer to the Oracle WebLogic Mbean documentation to understand the output.

    Read the article

  • SQL SERVER – SQL Server Misconceptions and Resolution – A Practical Perspective – TechEd 2012 India

    - by pinaldave
    TechEd India 2012 is just around the corner and I will be presenting there in two different sessions. On the very first day of the event, my presentation will be all about SQL Server Misconceptions and Resolution – A Practical Perspective. The dictionary tells us that a "misconception" is a view or opinion that is incorrect, based on faulty thinking or understanding. In SQL Server, there are so many misconceptions. In fact, when I hear some of them, I feel like fainting at that very moment! Seriously, I once came across a scenario where, instead of using INSERT INTO...SELECT, the developer used a CURSOR believing that the cursor was faster (duh!) - see the sketch at the end of this post. Here is the link to the blog post related to this.

    Pinal and Vinod in 2009

    I have been presenting at TechEd India for the last three years. This is my fourth opportunity to present a technical session on SQL Server. Just like in previous years, I decided to present something different. Here is the novelty this year: I will be presenting the session with Vinod Kumar. Vinod Kumar and I have great synergy when we work together. So far, we have written one SQL Server Interview Questions and Answers book and two video courses: (1) SQL Server Questions and Answers and (2) SQL Server Performance: Indexing Basics.

    Pinal and Vinod in 2011

    When we sat together and started building an outline for this course, we had many options in mind for this tango session. However, we decided to make the session as lively as possible while keeping it natural at the same time. We know our flow and we know our conversation highlights, but we do not know exactly what each of us is going to present. We have decided to challenge each other on stage and push each other's knowledge to the verge. We promise that the session will be entertaining, with lots of SQL Server trivia, tips and tricks. Here are the challenges I'll take on: I will puzzle Vinod with my difficult questions, and I will present a misconception that Vinod will have no resolution for. I need your help. Will you help me stump Vinod? If yes, come and attend our session and join me to prove that together we are superior (a friendly brain clash, but we must win!). SQL Server enthusiasts and SQL Server fans are going to have a gala time at #TechEdIn, as we have a very solid lineup of speakers and extremely interesting sessions at TechEdIn. Read the complete blog post of Vinod.

    Session Details
    Title: SQL Server Misconceptions and Resolution – A Practical Perspective (Add to Calendar)
    Abstract: "Earth is flat!" – an ancient common misconception, which has been proven incorrect as we progressed into modern times. In this session we will see various prevailing database misconceptions and their resolution with the aid of demos. In this unique session the audience will be part of the conversation and resolution.
    Date and Time: March 21, 2012, 15:15 to 16:15
    Location: Hotel Lalit Ashok - Kumara Krupa High Grounds, Bengaluru – 560001, Karnataka, India.

    Please submit your questions in the comments area and I will be sure to discuss them during my session. If I pick your question to discuss during my session, here is the gift I commit to right now – the SQL Server Interview Questions and Answers book.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: TechEd, TechEdIn
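    On the cursor misconception mentioned above, here is a minimal, hedged T-SQL sketch (table and column names are hypothetical) contrasting the single set-based INSERT INTO...SELECT with the row-by-row cursor the developer believed was faster; the set-based form does the same work in one statement:

        -- Set-based: one statement, one pass over the data
        INSERT INTO dbo.OrderArchive (OrderID, OrderDate, Amount)
        SELECT OrderID, OrderDate, Amount
        FROM dbo.Orders
        WHERE OrderDate < '20110101';

        -- Cursor-based equivalent (the misconception): row-by-row, far more overhead
        DECLARE @id INT;
        DECLARE c CURSOR FOR
            SELECT OrderID FROM dbo.Orders WHERE OrderDate < '20110101';
        OPEN c;
        FETCH NEXT FROM c INTO @id;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            INSERT INTO dbo.OrderArchive (OrderID, OrderDate, Amount)
            SELECT OrderID, OrderDate, Amount FROM dbo.Orders WHERE OrderID = @id;
            FETCH NEXT FROM c INTO @id;
        END
        CLOSE c;
        DEALLOCATE c;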

    Read the article

  • links for 2010-04-28

    - by Bob Rhubart
    Guido Schmutz: Oracle BPM11g available!
    Oracle ACE Director Guido Schmutz shares his impressions after attending a hands-on workshop conducted by Masons of SOA member Clemens Utschig-Utschig. (tags: oracle otn oracleace bpm soa soasuite)

    Elena Zannoni: 2010 Collaboration Summit Impressions
    Elena Zannoni has collected her thoughts on #C10 and shares them in this great blog post. (tags: oracle otn linux architecture collaborate2010)

    Hajo Normann: BPMN 2.0 in Oracle BPM Suite: The future of BPM starts now
    "The BPM Studio sets itself apart from pure play BPMN 2.0 tools by being seamlessly integrated inside a holistic SOA / BPM toolset: BPMN models are placed in SCA-Composites in SOA Suite 11g. This allows to abstract away the complexities of SOA integration aspects from business process aspects. For UIs in BPMN tasks, you have the richness of ADF 11g based Frontends." -- Oracle ACE Director and Masons of SOA member Hajo Normann (tags: oracle otn oracleace bpm soa sca)

    Brian Dirking: AIIM Best Practice Awards to Two Oracle Customers
    Brian Dirking's great write-up of the AIIM Awards Banquet, at which the Bureau of Indian Affairs and the Charles Town Police Department were among the winners of the 2010 Carl E. Nelson Best Practices Awards. (tags: oracle otn aiim bpm ecm enterprise2.0)

    Mark Wilcox: Upcoming Directory Services Live Webcast - Improve Time-to-Market and Reduce Cost with Oracle Directory Services
    Live Webcast: Improve Time-to-Market and Reduce Cost with Oracle Directory Services. Event Date: Thursday, May 27, 2010. Event Time: 10:00 AM Pacific Standard Time / 1:00 PM Eastern Standard Time (tags: oracle otn webcast security identitymanagement)

    Celine Beck: Introducing AutoVue Document Print Service
    Celine Beck offers a detailed overview of Oracle AutoVue. (tags: oracle otn entarch visualization printing)

    Vikas Jain: What's new in OWSM 11gR1 PS2 (11.1.1.3.0)?
    Vikas Jain shares links to resources relevant to the recently released patch set for Oracle Web Services Manager 11gR1. (tags: oracle otn soa webservices owsm)

    @theovanarem: Oracle SOA Suite 11g Release 1 Patch Set 2
    Theo Van Arem shares links to several resources relevant to the release of the latest patch set for Oracle SOA Suite 11g. (tags: oracle otn soa soasuite middleware)

    @vambenepe: Analyzing the VMforce announcement
    "The new thing is that force.com now supports an additional runtime, in addition to Apex. That new runtime uses the Java language, with the constraint that it is used via the Spring framework. Which is familiar territory to many developers. That's it." -- William Vambenepe (tags: oracle otn cloud paas)

    Read the article

  • SQL SERVER – Automation Process Good or Ugly

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday hosted by SQL Server Insane Asylum. The idea of this post really caught my attention. Automation – something getting itself done after the initial programming – is my understanding of the subject. The very next thought was: is it good or evil? The reality is there is no right answer. However, let us quickly note a few things, and then I would like to request your help to complete this post. We will start with the positive parts, where automation happens in SQL Server.

    The Good
    If I start thinking of SQL Server and automation, the very first thing that comes to my mind is SQL Agent, which runs various jobs. Once I configure any task or job, it runs fine (till something goes wrong!). Well, automation has its own advantages. We have all used SQL Agent for so many things – backups, various validation jobs, maintenance jobs and numerous other things. What other kinds of automation tasks do you run on your database server?

    The Ugly
    This part is very interesting, because it can get really ugly(!). During my career I have found so many bad automation agent jobs:
    - A client had an agent job that dropped the clean buffers every hour.
    - A client used Database Mail to send regular emails instead of only the necessary alert-related emails.
    - The best one: a client used the new Missing Index and Unused Index scripts in a SQL Agent job to follow their suggestions 100%. Believe me, I have never seen such a badly performing and hard-to-optimize database. (I ended up dropping all non-clustered indexes on the development server, running the production workload there again, and then configuring optimal indexes.)
    Shrinking a database is a performance killer. It should never be automated. SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance
    The one I hate the most is AutoShrink Database. It has given me a hard time in my career quite a few times (a quick script to hunt it down appears at the end of this post). SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server
    Automation is necessary, but common sense is a must when creating automation.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
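    As promised above, here is a minimal T-SQL sketch (the database name is hypothetical) to find databases with auto-shrink enabled and switch it off, so that at least this particular piece of "automation" stops hurting you:

        -- List databases that currently have auto-shrink enabled
        SELECT name, is_auto_shrink_on
        FROM sys.databases
        WHERE is_auto_shrink_on = 1;

        -- Turn it off for an affected database (hypothetical name)
        ALTER DATABASE [SalesDB] SET AUTO_SHRINK OFF;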

    Read the article

  • Book Reviews: Art of Community and Eyetracking Web Usability

    - by ultan o'broin
    Holiday time offers a chance to catch up on some user experience and user assistance related material. So, two short book reviews (which I considered using my new Tumblr blog for; more about that another time) coming up.

    The Art of Community by Jono Bacon
    An excellent starting point for anyone wanting to get going in the community software (FLOSS, for example) space or understand how to set up, manage, and leverage the collective intelligence of communities for whatever ends. The book is a little too long in my opinion, and of course, use of what Jono recommends needs to be nuanced and adapted for the enterprise applications space (hardly surprising there is a lot about Ubuntu, LugRadio, and so on, given Jono's interests). Shame there wasn't more information on international, non-English community considerations too. Still, some great ideas and insight into setting up and managing communities that I will leverage (watch out for the results on this blog later in 2011). One section, on collaborative writing, really jumped out. It reinforced the whole idea that successful community initiatives are based on instigators knowing what makes the community tick in the first place. How about this for insight into user profiles for people who write community user assistance (OK then, "doc") and what tools they might use (in this case, we're talking about Jokosher):

    "Most people who write documentation for open source software projects would fall into the category of power user. They are technology enthusiasts who are not interested in the super-technical avenues of programming, but want to help out. Many of these people have good writing skills and a good knowledge of using the software, so the documentation fit is natural. With Jokosher we wanted to acknowledge this profile of user. As such, instead of focussing on complex text processing tools, we encouraged our documentation contributors to use a wiki."

    The book is available for free here, as well as being available from the usual sources.

    Eyetracking Web Usability by Jakob Nielsen and Kara Pernice
    Another fine book by established experts. I have some field experience of eyetracking studies myself (in the user assistance for enterprise applications space), though Jakob and Kara concentrate on websites for their research here. I would caution how much about websites transfers easily to the applications space, especially enterprise applications, as claimed in the book. However, Jakob and Kara do make the case very well that understanding design goals (for example, productivity improvement in the case of applications) and the context of the software use is critical. Executing a study using eyetracking technology requires that you know what you want to test, can set up realistic tasks for testing by representative testers, and can then analyze the results. Be precise, as lots of data will be generated (I think the authors underplay the effort in analyzing data too). What I found disappointing was the lack of emphasis on eyetracking as only part of the usability solution. It's really for fine-tuning designs, in my opinion, and should be used after other design reviews. I also wasn't that crazy about the level of disengagement between the qualitative and quantitative sides of this kind of testing that the book indicated. I think it is useful to have testers verbalize their thoughts and for test engineers to prompt, intervene, or guide as necessary. More on cultural or international aspects of usability testing might have been included too (websites are available to everyone). To conclude, I enjoyed the book, took on board some key takeaways about methodologies, and found the recommendations sensible and easy to follow (for example, about form layouts). Applying enterprise applications requirements, such as those relating to user profiles, design goals, and overall context of use, in conjunction with what's in this book would be the way to go here. It also made me think how interesting it would be to compare eyetracking findings between website and enterprise applications usage.

    Read the article

  • Improving WIF’s Claims-based Authorization - Part 3 (Usage)

    - by Your DisplayName here!
    In the previous posts I showed off some of the additions I made to WIF's authorization infrastructure. I now want to show some samples of how I actually use these extensions. The following code snippets are from Thinktecture.IdentityServer on Codeplex.

    The following shows the MVC attribute on the WS-Federation controller:

        [ClaimsAuthorize(Constants.Actions.Issue, Constants.Resources.WSFederation)]
        public class WSFederationController : Controller

    or:

        [ClaimsAuthorize(Constants.Actions.Administration, Constants.Resources.RelyingParty)]
        public class RelyingPartiesAdminController : Controller

    In other places I used the imperative approach (e.g. the WRAP endpoint):

        if (!ClaimsAuthorize.CheckAccess(principal, Constants.Actions.Issue, Constants.Resources.WRAP))
        {
            Tracing.Error("User not authorized");
            return new UnauthorizedResult("WRAP", true);
        }

    For the WCF WS-Trust endpoints I decided to use the per-request approach since the SOAP actions are well defined here. The corresponding authorization manager roughly looks like this:

        public class AuthorizationManager : ClaimsAuthorizationManager
        {
            public override bool CheckAccess(AuthorizationContext context)
            {
                var action = context.Action.First();
                var id = context.Principal.Identities.First();

                // if application authorization request
                if (action.ClaimType.Equals(ClaimsAuthorize.ActionType))
                {
                    return AuthorizeCore(action, context.Resource, context.Principal.Identity as IClaimsIdentity);
                }

                // if ws-trust issue request
                if (action.Value.Equals(WSTrust13Constants.Actions.Issue))
                {
                    return AuthorizeTokenIssuance(new Collection<Claim> { new Claim(ClaimsAuthorize.ResourceType, Constants.Resources.WSTrust) }, id);
                }

                return base.CheckAccess(context);
            }
        }

    You see that it is really easy now to distinguish between per-request and application authorization, which makes the overall design much easier. HTH

    Read the article

  • What’s New from the Oracle Marketing Cloud at Oracle OpenWorld 2014

    - by Kathryn Perry
    A Guest Post by Laura Vogel, Director, Oracle Marketing Cloud Events

    Marketing—CX Central is your hub for all things Marketing related at OpenWorld in San Francisco, September 28-October 2, 2014. Learn how to personalize the modern marketing journey to improve customer loyalty. We're hosting more than 60 breakout sessions, half of which will highlight customer success stories from marquee brands including Bizo, Comcast, Dell, Epson, John Deere, Lane Bryant, ReadyTalk and Shutterfly.

    Moscone West, Levels 2 and 3
    To learn more about how modern marketing works, visit Moscone West, levels 2 and 3, for exciting demos of each of the Oracle Marketing Cloud solutions (BlueKai, Compendium, Eloqua, Push I/O, and Responsys). You also can check out our stations for Vertical Marketing Best Practices, the Markie Awards, and more!

    CX Spotlight Sessions
    "Accelerating Big Profits in Big Data," Jeff Tanner, Baylor University
    "Using Content Marketing to Impact Every Stage of the Buyer's Journey," Jennifer Agustin, Bizo
    "Expanding Your Marketing with Proven Testing and Optimization," Brian Border, Shutterfly, and Matthew Balthazor, Epson
    "Modern Marketing: The New Digital Dialogue," Cory Treffiletti, Oracle

    A Special Marquee Session
    Dell's Hayden Mugford will speak on "The Digital Ecosystem: Driving Experience Through Contact Engagement." She will highlight how the organization built a digital ecosystem that supports a behaviorally driven, multivehicle nurturing campaign. The Dell 1:1 Global Marketing team worked with multiple partners to innovate integrations with Oracle Eloqua, Oracle Real-Time Decisions for real-time decision logic, and a content management system (CMS) that enables 100 percent customized e-mails. The program doubled average order values for nurtured contacts versus non-nurtured and tripled open and click-through rates versus push e-mail.

    It Wouldn't Be an Oracle Marketing Cloud Event Without a Party!
    We're hosting CX Central Fest: a unique customer experience specifically designed for attendees of CX Central. It will include a chance to rock out at a private concert featuring Los Angeles indie electronic pop group Capital Cities! Join us Tuesday, September 30, from 7-9 p.m.

    Other Oracle Marketing Cloud Session Highlights
    - Thought leadership by role
    - Exploring the benefits of moving to the Cloud
    - Product line roadmaps and innovations in Marketing
    - Technical deep dives for product lines within Marketing
    - Best practices and impactful business measurements
    - Solutions that are integrated across CX

    Target Audience
    Session content is geared toward professionals in Marketing, Marketing Operations, Marketing Demand Generation, and Social: Chief Marketing Officers, Vice Presidents, Directors and Managers.

    Outcomes
    Customers attending Marketing—CX Central @ OpenWorld will be able to:
    - Gain insight into delivering consistent cross-channel marketing
    - Discover how to provide the right information to the right customer at the right time and with the right channel
    - Get answers to burning questions and advice on business challenges
    - Hear from other Oracle customers about recommended best practices to help their organization move forward
    - Network and share ideas to help create a strategy for connecting with customers in better ways

    Resources At a Glance
    - Register Now
    - Track Site—View Marketing Sessions
    - Focus on Session Doc
    - Downloadable Justification Email

    OpenWorld is a fabulous way for you to see all that Oracle Marketing Cloud has to offer. Register today.

    Read the article

  • How to create Adhoc workflow in UCM

    - by vijaykumar.yenne
    UCM has an inbuilt workflow engine that can handle document-centric workflow approval/rejection processes to ensure the right set of assets goes into the repository. Anybody who has gone through the documentation is aware that there are two types of workflows that can be defined using the Workflow Admin applet in UCM, namely Criteria and Basic. While a Criteria workflow is an automatic workflow process based on certain metadata attributes (Security Group and one of the metadata fields), a Basic workflow is a manual workflow that needs to be initiated by the admin. Any workflow that can be put on the whiteboard can be translated into the UCM workflow process, and there are concepts like sub-workflows, tokens, events, and idoc scripting that can be introduced to handle any kind of complex workflow. There is a specific Workflow Implementation guide that explains the concepts in detail. One of the standard queries I come across is how to handle ad hoc workflows where, at the time of contributing the content, the contributors would like to decide on the workflow to be initiated and the users to be picked for approval in each step; hence this post.

    This is what I want to achieve: I would like to display on my check-in screen the kind of workflows that a contributor could choose from. Based on the workflow the contributor chooses, the other metadata fields (Step One, Step Two and Step Three) need to be filled in, and these fields decide who the approvers are going to be.

    1. Create a criteria workflow called One_Step_Review.
    2. Create two tokens: StepOne <$wfAddUser(xWorkflowStepOne, "user")$> and OriginalAuthor <$wfAddUser(wfGet("OriginalAuthor"), "user")$>.
    3. Create two steps in the workflow created (One_Step_Review).
    4. Edit Step1 of the workflow, add the StepOne token, and select the review permission.
    5. In the exit conditions tab, require at least one reviewer.
    6. In the events tab, add an entry event <$wfSet("OriginalAuthor",dDocAuthor)$> to capture the contributor, who shall be notified in the second step of the workflow.
    7. Add the second step, Notify_Author, to the workflow.
    8. Add the OriginalAuthor token to the above step.
    9. Enable the workflow.
    10. Open the Configuration Manager applet and create a metadata field Workflow with the option list enabled, and add the list of values.
    11. Create another metadata field, WorkflowStepOne, with the option list configured to the Users view. This shall display all the users registered with UCM, which, when selected, shall be associated with the tokens defined for the workflow (refer to the tokens above).

    As indicated in the above steps, you could create multiple workflows and associate the custom metadata field values to the tokens so that contributors can decide who can approve their content.

    Read the article

  • Home Energy Management & Automation with Windows Phone 7

    A number of people at Clarity are personally interested in home energy conservation and home automation. We feel that a mobile device is a great fit for bringing this idea to fruition. While this project is merely a concept and not directly associated with Microsoft's Hohm web service, it provides a great model for communicating the concept. I wanted to take the idea a step further and combine saving energy in your home with the ability to track water usage and control your home devices. I designed an application that focuses on total home control and not just energy usage.

    Application Overview
    By monitoring home consumption in real time and with yearly projections, users can pinpoint vampire devices, times of high or low consumption, and wasteful patterns of energy use. Energy usage meters indicate total current consumption as well as individual device consumption. Users can then use the information to take action, make adjustments, and change their consumption behaviors. The app can be used to automate certain systems like lighting, temperature, or alarms. Other features can be turned on and off at the touch of a toggle switch on your phone, away from home. Forget to turn off the TV or shut the garage door? No problem, you can do it from your phone. Through settings you can enable and disable features of the phone that apply to your home, making it a completely customized and convenient experience. To be clear, this equates to more security, big environmental impact, and even bigger savings.

    Design and User Interface
    Since this panorama application is designed for Windows Phone 7 devices, it complies with the UI Design and Interaction Guide for WP7. I developed the frame and page hierarchy from existing examples. The interface takes advantage of the interactive nature of touch screens with slider controls, pivot control views, and toggle switches to turn devices on and off (not shown in mockup). I followed recommendations for text-based elements and adapted the tile notifications to display the most recent user activity. For example, the mockup indicates upon launching the app that the last thing you did was program the thermostat. This model is great for quick-launching common user actions. One last design feature to point out is the technical reason for supplying both light and dark themes for the app. Since this application targets energy consumption, it only makes sense to consider the effect of the app's background color or image on the phone's energy use. When displaying darker colors like black, the OLED display may use less power, extending battery life.

    Other Considerations
    For now I left out options for wind- and solar-powered energy because they are not available to everyone. Renewable energy sources and the new technologies associated with them are definitely ideas to keep in mind for a next iteration. Another idea to explore for such an application would be to include a savings model similar to mint.com. In addition to general energy-saving recommendations, the application could recommend customized ways to save based on your current utility providers and the available options in your area. If your television or refrigerator is guilty of sucking a lot of energy then you may see recommendations for Energy Star products that could save you even more money!

    Read the article

  • Chalk Talk with John: Business Value of Identity and Access Management

    - by John Brunswick
    Conveying the business value of Identity and Access Management to non-technologists can potentially be challenging, especially considering the breadth of capability supplied by these technologies. In this episode of Chalk Talk with John, Bob at Codeaway Valley asks Jim from Middleware Fields how they are able to manage access to buildings and facilities throughout their community. Bob and his team struggle to keep up with the needs of their community members, while ensuring the community's safety. Jim shares his creative solution to simplifying the management of access throughout their community in Middleware Fields.

    About me: Hi, I am John Brunswick, an Oracle Enterprise Architect. As an Oracle Enterprise Architect, I focus on the alignment of technical capabilities in support of business vision and objectives, as well as the overall business value of technology. Before coming to Oracle, I was a Practice Manager within BEA Systems' Business Interaction Division consulting organization, orchestrating enterprise systems in support of line-of-business goals. Follow me on Twitter and visit my site for Oracle Fusion Middleware related tips.

    Read the article

  • List of all states from COMPOSITE_INSTANCE, CUBE_INSTANCE, DLV_MESSAGE tables

    - by Deepak Arora
    In many of my engagements I get asked repeatedly about the states of the composites in 11g and how to decipher them, especially when we are troubleshooting issues around purging. I have compiled a list of all the states from the COMPOSITE_INSTANCE, CUBE_INSTANCE, and DLV_MESSAGE tables. These are the primary tables used when working with BPEL composites and the ECID.

    COMPOSITE_INSTANCE states:
    0 - Running
    1 - Completed
    2 - Running with faults
    3 - Completed with faults
    4 - Running with recovery required
    5 - Completed with recovery required
    6 - Running with faults and recovery required
    7 - Completed with faults and recovery required
    8 - Running with suspended
    9 - Completed with suspended
    10 - Running with faults and suspended
    11 - Completed with faults and suspended
    12 - Running with recovery required and suspended
    13 - Completed with recovery required and suspended
    14 - Running with faults, recovery required, and suspended
    15 - Completed with faults, recovery required, and suspended
    16 - Running with terminated
    17 - Completed with terminated
    18 - Running with faults and terminated
    19 - Completed with faults and terminated
    20 - Running with recovery required and terminated
    21 - Completed with recovery required and terminated
    22 - Running with faults, recovery required, and terminated
    23 - Completed with faults, recovery required, and terminated
    24 - Running with suspended and terminated
    25 - Completed with suspended and terminated
    26 - Running with faulted, suspended, and terminated
    27 - Completed with faulted, suspended, and terminated
    28 - Running with recovery required, suspended, and terminated
    29 - Completed with recovery required, suspended, and terminated
    30 - Running with faulted, recovery required, suspended, and terminated
    31 - Completed with faulted, recovery required, suspended, and terminated
    32 - Unknown
    64 -

    CUBE_INSTANCE states:
    0 - STATE_INITIATED
    1 - STATE_OPEN_RUNNING
    2 - STATE_OPEN_SUSPENDED
    3 - STATE_OPEN_FAULTED
    4 - STATE_CLOSED_PENDING_CANCEL
    5 - STATE_CLOSED_COMPLETED
    6 - STATE_CLOSED_FAULTED
    7 - STATE_CLOSED_CANCELLED
    8 - STATE_CLOSED_ABORTED
    9 - STATE_CLOSED_STALE
    10 - STATE_CLOSED_ROLLED_BACK

    DLV_MESSAGE states:
    0 - STATE_UNRESOLVED
    1 - STATE_RESOLVED
    2 - STATE_HANDLED
    3 - STATE_CANCELLED
    4 - STATE_MAX_RECOVERED

    In 11g the INVOKE_MESSAGES table no longer exists, so to distinguish between a new message (invoke) and a callback (DLV), there is a DLV_TYPE column that defines the type of message:

    DLV_TYPE values:
    1 - Invoke message
    2 - DLV message

    MEDIATOR_INSTANCE states:
    0 - No faults, but there still might be running instances
    1 - At least one case is aborted by user
    2 - At least one case is faulted (non-recoverable)
    3 - At least one case is faulted and one case is aborted
    4 - At least one case is in recovery required state
    5 - At least one case is in recovery required state and at least one is aborted
    6 - At least one case is in recovery required state and at least one is faulted
    7 - At least one case is in recovery required state, one faulted, and one aborted
    >=8 and <16 - Running
    >=16 - Stale

    A small query sketch showing how to decode these states in practice appears at the end of this post. In my next blog posting I will walk through the lifecycle of a BPEL process using the above states for the following use cases:
    - New BPEL process - initial Receive activity
    - Callback BPEL process - mid-level Receive activity
    As always, comments and questions welcome! Deepak
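    As promised, here is a hedged SQL sketch for decoding states at query time; run it against the SOA Infrastructure schema (e.g. <prefix>_SOAINFRA; adjust schema and column names to your installation) to count BPEL instances per CUBE_INSTANCE state:

        -- Sketch only: count BPEL instances by decoded CUBE_INSTANCE state
        SELECT state,
               CASE state
                 WHEN 0  THEN 'STATE_INITIATED'
                 WHEN 1  THEN 'STATE_OPEN_RUNNING'
                 WHEN 2  THEN 'STATE_OPEN_SUSPENDED'
                 WHEN 3  THEN 'STATE_OPEN_FAULTED'
                 WHEN 4  THEN 'STATE_CLOSED_PENDING_CANCEL'
                 WHEN 5  THEN 'STATE_CLOSED_COMPLETED'
                 WHEN 6  THEN 'STATE_CLOSED_FAULTED'
                 WHEN 7  THEN 'STATE_CLOSED_CANCELLED'
                 WHEN 8  THEN 'STATE_CLOSED_ABORTED'
                 WHEN 9  THEN 'STATE_CLOSED_STALE'
                 WHEN 10 THEN 'STATE_CLOSED_ROLLED_BACK'
                 ELSE 'OTHER'
               END AS state_name,
               COUNT(*) AS instance_count
        FROM   cube_instance
        GROUP  BY state
        ORDER  BY state;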

    Read the article

  • The dislikes of TDD

    - by andrewstopford
    I enjoy debates about TDD, and Brian Harry's blog post is no exception. Brian sounds out what he likes and dislikes about TDD, and it's the dislikes I'll focus on.

    "The idea of having unit tests that cover virtually every line of code that I've written that I have to refactor every time I refactor my code makes me shudder. Doing it this way makes me take nearly twice as long as it would otherwise take and I don't feel like I get sufficient benefits from it."

    Refactoring your tests to match your refactored code sounds like the tests are suffering. Too many hard dependencies with no SOLID concerns are a surefire reason you would do this. Maybe at the start of a TDD cycle you would need to do this as your design evolves and you remove these dependencies, but this should quickly be resolved as you refactor. If you find yourself still doing it, then stop and look back at your design.

    "Don't get me wrong, I'm a big fan of unit tests. I just prefer to write them after the code has stopped shaking a bit. In fact most of my early testing is 'manual'. Either I write a small UI on top of my service that allows me to plug in values and try it, or I write some quick API tests that I throw away as soon as I have validated them."

    The problem with this is that a UI can make assumptions about your code that you then just unit test around, and very quickly the design becomes bad and technical debt sweeps in. If you want to black-box test your code with a UI, then do so after your TDD cycles, not before.

    "This is probably my biggest issue with a literal TDD interpretation. TDD says you never write a line of code without a failing test to show you need it. I find it leads developers down a dangerous path. Without any help from a methodology, I have met way too many developers in my life that 'back into a solution'. By this, I mean they write something, it mostly works, and they discover a new requirement, so they tack it on, and another, and another, and when they are done they've got a monstrosity of special cases, each designed to handle one specific scenario. There's way more code than there should be, and it's way too complicated to understand. I believe in finding general solutions to problems from which all the special cases naturally derive, rather than building a solution of special cases. In my mind, to do this, you have to start by conceptualizing and coding the framework of the general algorithm. For me, that's a relatively monolithic exercise."

    TDD is a development practice, not a methodology; the danger is that the solution becomes a mass of different things that violate DRY. TDD won't solve these problems; only good communication and practices like pairing will help. Above all else, assuming that TDD replaces a methodology is a mistake; combine it with whatever works for your team and business, but only good communication will help. A good naming scheme and structure for folders, files, and tests can help you and your team isolate which tests are for what.

    Read the article

  • Setting up SharePoint without Active Directory

    - by eJugnoo
    In order to set up SharePoint without AD, you need to run the following PowerShell command in the Management Shell after installing SharePoint on your server, but before running the Config Wizard (we don't want to run this SP farm in stand-alone mode!):

    1. New-SPConfigurationDatabase

    SYNOPSIS
        Creates a new configuration database.
    SYNTAX
        New-SPConfigurationDatabase [-DatabaseName] <String> [-DatabaseServer] <String> [[-DirectoryDomain] <String>] [[-DirectoryOrganizationUnit] <String>] [[-AdministrationContentDatabaseName] <String>] [[-DatabaseCredentials] <PSCredential>] [-FarmCredentials] <PSCredential> [-Passphrase] <SecureString> [-AssignmentCollection <SPAssignmentCollection>] [<CommonParameters>]
    DESCRIPTION
        The New-SPConfigurationDatabase cmdlet creates a new configuration database on the specified database server. This is the central database for a new SharePoint farm. For permissions and the most current information about Windows PowerShell for SharePoint Products, see the online documentation (http://go.microsoft.com/fwlink/?LinkId=163185).
    RELATED LINKS
        Backup-SPConfigurationDatabase
        Disconnect-SPConfigurationDatabase
        Connect-SPConfigurationDatabase
        Remove-SPConfigurationDatabase
    REMARKS
        To see the examples, type: "get-help New-SPConfigurationDatabase -examples".
        For more information, type: "get-help New-SPConfigurationDatabase -detailed".
        For technical information, type: "get-help New-SPConfigurationDatabase -full".

    NOTE: Use the -AdministrationContentDatabaseName switch to pass the name of the admin database you want instead of the GUID-based name it automatically creates. Hence, one can easily control the Admin, Config, and Content database names at the time of farm creation. If creating a new farm, you can also delete and re-provision any automatically created service databases from the UI, to decide what database names you want.

    2. Run the SharePoint Configuration Wizard, and you'll see the server already added to the farm. Select the option not to disconnect from the farm, and proceed. Select the port and authentication (NTLM in my case). Click Next, and the wizard will complete the remaining steps of provisioning, including creation of the Central Admin web app on the desired port. Once successful, it will open the Central Admin site and ask you to run the Farm Config Wizard. I chose to skip it and do things manually, to remain in control of what is happening on the farm - like creating the web app for site collections, creating the very first site collection, and any other service applications.

    I needed this to create a public-facing installation of SharePoint Foundation RTM on a server which didn't have AD. Now I am going to set up FBA, and possibly Live ID auth as well. I will also be setting up RBS and multi-tenancy on this farm, and will post any notes and findings here… --Sharad

    Read the article

  • Truly understand the threshold for document set in document library in SharePoint

    - by ybbest
    Recently, I worked on an issue with the list view threshold. The problem was that when the user navigated to a view of the document library, it displayed the error message "list view threshold is exceeded", yet the view contained no data. The list view threshold limit is 5,000 by default for non-admin users. This limit is not the number of items returned by your query; it is the total number of items the database needs to read to calculate the returned result set (a simple SQL analogy at the end of this post illustrates the point). So although the view returns no results, calculating that (empty) result set requires the database to read more than 5,000 items. To fix the issue, you need to create an index for the column that you use in the filter for the view. Let's look at the problem in detail. You can download a solution to replicate this issue here.

    1. Go to Central Admin ==> Web Application Management ==> General Settings ==> click on Resource Throttling.
    2. Change the list view threshold for the web application from 5,000 to 2,000 so that I can show the problem without loading more than 5,000 items into the list.
    3. Go to the page that displays the Approved view of the Loan application document set. It displays the error message although I do not have any data returned for this view.
    4. To get around this, you need to create an indexed column. Go to list settings and click on Indexed columns.
    5. Click on the "Create a new index" link.
    6. Select the LoanStatus field, as I use this field as the filter to create the view.
    7. After the index is created, I can now access the Approved view; as you can see, it does not return any data.

    Notes: List View Threshold: Specify the maximum number of items that a database operation can involve at one time. Operations that exceed this limit are prohibited.

    References:
    SharePoint lists V: Techniques for managing large lists
    Manage large SharePoint lists for better performance
    http://blogs.technet.com/b/speschka/archive/2009/10/27/working-with-large-lists-in-sharepoint-2010-list-throttling.aspx
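    The mechanics are easier to picture with a plain SQL analogy (an illustration only, not SharePoint's actual content database schema; the table and column names are hypothetical): without an index on the filter column, the database must read every row to evaluate the filter, and it is those reads, not the rows returned, that the threshold counts.

        -- Hypothetical table standing in for a SharePoint list.
        -- Without this index, the SELECT below must scan every row,
        -- even if zero rows match; that scan is what trips the threshold.
        CREATE INDEX IX_Items_LoanStatus ON dbo.Items (LoanStatus);

        SELECT Id, Title
        FROM dbo.Items
        WHERE LoanStatus = 'Approved';  -- index seek instead of a full scan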

    Read the article

  • Scrum for Team Foundation Server 2010

    - by Martin Hinshelwood
    I will be presenting a session on “Scrum for TFS 2010” not once, but twice! If you are going to be at the Aberdeen Partner Group meeting on 27th April, or at DDD Scotland on 8th May, then you may be able to catch my session.

    Credit: I want to give special thanks to Aaron Bjork from Microsoft, who provided me with most of my material. He is a Scrum and PowerPoint genius.

    Scrum for Team Foundation Server 2010

    Synopsis: Visual Studio ALM (formerly Visual Studio Team System (VSTS)) and Team Foundation Server (TFS) are the cornerstones of development on the Microsoft .NET platform. These are the best tools for a team to have successful projects and for the developers to have a focused and smooth software development process. For TFS 2010, Microsoft is heavily investing in Scrum and has already started moving some teams across to using it. Martin will not be going in depth with Scrum, but you can find out more about Scrum by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment.

    Come and see Martin Hinshelwood, Visual Studio ALM MVP and Solution Architect from SSW, show you:
    - How to successfully gather requirements with user stories
    - How to plan a project using TFS 2010 and Scrum
    - How to work with a product backlog in TFS 2010
    - The right way to plan a sprint with TFS 2010
    - Tracking your progress
    - The right way to use work items
    - What you can use from the built-in reporting as well as the project portals available from the SharePoint dashboard
    - The important reports to give your Product Owner / Project Manager

    Walk away knowing how to see the project health and progress. Visual Studio ALM is designed to help address many of these traditional problems faced by teams. It does so by providing a set of integrated tools to help teams improve their software development activities and to help managers better support the software development processes. During this session we will cover the lifecycle of creating work items and how this fits into Scrum using Visual Studio ALM and Team Foundation Server.

    If you want to know more about how to do Scrum with TFS, there is a new course that has been created in collaboration with Microsoft and Scrum.org that is going to be the official course for working with TFS 2010. SSW has Professional Scrum Developer trainers who specialise in training your developers in implementing Scrum with Microsoft's Visual Studio ALM tools. Ken Schwaber and Sam Guckenheimer: Professional Scrum Development

    Technorati Tags: Scrum, VS ALM, VS 2010, TFS 2010

    Read the article

  • Control Your Favorite Music Player from Firefox

    - by Asian Angel
    Do you love listening to music while you browse? Now you can access and control your favorite music player directly from Firefox with the FoxyTunes extension.

    FoxyTunes in Action
    Once you have installed the extension and restarted Firefox, you will see the FoxyTunes toolbar located in the Status Bar. The default media app is Windows Media Player, but this can be easily changed. Here are the buttons/items available with the default settings: Search, FoxyTunes Main Menu, Show Player, Select Player, Previous Track, Play, Next Track, Mute On/Off, Volume, Play File, TwittyTunes, FoxyTunes Search/Explore, Open FoxyTunes Planet, and Toggle Visibility/Drag and drop to move.
    Note: You can hide or show individual buttons/items using the FoxyTunes menus.

    Curious about the media players that FoxyTunes works with? There is a complete listing available, and it definitely looks terrific! Notice that the currently selected media app is shown bold and blue. For our example we chose Spotify, which we have previously covered. Keep in mind that you may or may not need to have your favorite media app open prior to "starting" FoxyTunes up (i.e. the Play button).

    The FoxyTunes Main Menu includes a Controls sub-menu. In the Extras menu, clicking on Skins will take you to the FoxyTunes Skins webpage. The Configurations menu and its sub-menus hold everything else: you do not need to look for options in the Add-ons Manager window, as everything you need is contained in these menus.

    If you do not like having FoxyTunes in the Status Bar, you can easily drag and drop it to another toolbar. You can also condense the appearance of FoxyTunes using the small triangle buttons located in different spots throughout the FoxyTunes toolbar. With just a click or two you can greatly reduce its impact on your UI.

    Conclusion
    If you love listening to music while browsing, the FoxyTunes extension lets you take care of everything right from your browser.

    Links
    Download the FoxyTunes extension (Mozilla Add-ons)
    Download the FoxyTunes extension (Extension Homepage)
    Note: FoxyTunes add-ins for Internet Explorer and Yahoo! Messenger are available here.

    Read the article

  • Windows Azure Use Case: Infrastructure Limits

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description:
    Physical hardware components take up room, use electricity, create heat (and therefore need cooling), and require wiring and special storage units. All of these requirements cost money, whether renting space at a data center or building out a local facility. In some cases, this can be a catalyst for evaluating options to remove the infrastructure requirement entirely by moving to a distributed computing environment.

    Implementation:
    There are three main options for moving to a distributed computing environment.

    Infrastructure as a Service (IaaS)
    The first option is simply to virtualize the current hardware and move the VMs to a provider. You can do this with Microsoft's Hyper-V product or other software: build the systems and host them locally on fewer physical machines. This is a good option for canned applications (where you have to type setup.exe) but not as useful for custom applications, as you still have to license and patch those servers, and there are hard limits on the VM sizes.

    Software as a Service (SaaS)
    If there is already software available that does what you need, it may make sense to simply purchase not only the software license but the use of it on the vendor's servers. Microsoft's Exchange Online is an example of simply using an offering from a vendor on their servers. If you do not need a great deal of customization, have no interest in owning or extending the source code, and need to implement a solution quickly, this is a good choice.

    Platform as a Service (PaaS)
    If you do need to write software for your environment, your next choice is a Platform as a Service such as Windows Azure. In this case you no longer manage physical or even virtual servers. You start at the code and data level of control and responsibility, and your focus is more on the design and maintenance of the application itself. In this case you own the source code and can extend or change it as you see fit. An interesting side benefit to using Windows Azure as a PaaS is that the Application Fabric component allows a hybrid approach, which gives you a basis to allow on-premise applications to leverage distributed computing paradigms.

    No one solution fits every situation. It's common to see organizations pick a mixture of on-premise, IaaS, SaaS and PaaS components. In fact, that's a great advantage to this form of computing - choice.

    References:
    5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0
    Application Patterns for the Cloud: http://blogs.msdn.com/b/kashif/archive/2010/08/07/application-patterns-for-the-cloud.aspx

    Read the article

  • The New Social Developer Community: a Q&A

    - by Mike Stiles
    In our last blog, we introduced the opportunities that lie ahead for social developers as social applications reach across every aspect and function of the enterprise. Leading the upcoming JavaOne Social Developer Program on October 2 at the San Francisco Hilton is Roland Smart, VP of Social Marketing at Oracle. I got to ask Roland a few of the questions an existing or budding social developer might want to know as social extends beyond interacting with friends and marketing and into the enterprise.

    Why is it smart for developers to specialize as social developers? What opportunities lie in the immediate future that make this a critical, in-demand position?

    Social has changed the way we interact with brands and with each other across the web. As we acclimate to a new social paradigm, we also look to extend its benefits into new areas of our lives. The workplace is a logical next step, and we're starting to see social interactions more and more in this context. But unlocking the value of social interactions requires technical expertise and knowledge of developing social apps that tap into the social graph. Developers focused on integrating social experiences into enterprise applications must be familiar with popular social APIs and must understand how to build enterprise social graphs of their own. These developers are part of an emerging community of social developers and are key to socially enabling the enterprise. Facebook rebranded their Preferred Developer Consultant Group (PDC) and the Preferred Marketing Developers (PMD) to underscore the fact that developers are required inside marketing organizations to unlock the full potential of their platform. While this trend is starting on the marketing side with marketing developers, this is just an extension of the social developer concept that will ultimately drive social across the enterprise.

    What are some of the various ways social will be making its way into every area of enterprise organizations? How will it be utilized, and what kinds of applications are going to be needed to facilitate and maximize these changes?

    Check out Oracle's vision for the social-enabled enterprise. It's a high-level overview of how social will have impact across the enterprise. For example:
    - HR can leverage social in recruiting and retention
    - Sales can leverage social as a prospecting tool
    - Marketing can use social to gain market insight
    - Customer support can use social to leverage community support to improve customer satisfaction while reducing service cost
    - Operations can leverage social to improve systems
    That's only the beginning. Once sleeves get rolled up and social developers and innovators get to work, still more social functions will no doubt emerge.

    What makes Java one of, if not the most viable platform on which to build these new enterprise social applications?

    Java is certainly one of the best platforms on which to build social experiences because there's such a large existing community of Java developers. This means you can affordably recruit talent, and it's possible to effectively solicit advice from the community through various means, including our new Social Developer Community. Beyond that, there are already some great proof points that Java is the best platform for creating social experiences at scale. Consider LinkedIn and Twitter.

    Tell us more about the benefits of collaboration and about what the Oracle Social Developer Community is. What opportunities does that offer up, and what are some of the ways developers can actively participate in and benefit from that community?

    Much has been written about the overall benefits of collaborating with other developers. Those include an opportunity to introduce yourself to the community of social developers, foster a reputation, establish an expertise, contribute to the advancement of the space, get feedback, experiment with the latest concepts, and gain inspiration. In short, collaboration is a tool that must be applied properly within a framework to get the most value out of it. The OSDC is a place where social developers can congregate to discuss the opportunities and challenges of building social integrations into their applications.

    What "needs" will this community have? We don't know yet. But we wanted to create a forum where we can engage and understand what social developers are thinking about, excited about, struggling with, etc. The OSDC can then step in, if we can help remove barriers and add value in a serious and committed way, so Oracle can help drive practice development.

    Read the article

  • NetBeans Podcast 69

    - by TinuA
    Podcast Guests: Terrence Barr, Simon Ritter, Jaroslav Tulach (It's an all-Oracle lineup!) Download mp3: 47 Minutes – 39.5 MB. Subscribe on iTunes.

    NetBeans Community News with Geertjan and Tinu: If you missed the first two Java Virtual Developer Day events in early May, there's still one more LIVE training left on May 28th. Sign up here to participate live in the APAC time zone or watch later ON DEMAND. Video: Get started with Vaadin development using NetBeans IDE. NetBeans IDE was at JavaCro 2014 and at Hippo Get-together 2014. Another great lineup is in the works for NetBeans Day at JavaOne 2014; more details coming soon! NetBeans' Facebook page is almost at 40,000 Likes! Help us crack that milestone in the next few weeks! Other great ways to stay updated about NetBeans? Twitter and Google+.

    09:28 / Terrence Barr - What to Know about Java Embedded: Terrence Barr, a Senior Technologist and Principal Product Manager for Embedded and Mobile technologies at Oracle, discusses new features of the Java SE Embedded and Java ME Embedded platforms, and sheds some light on the differences between them and what they have to offer to developers. Learn more about Java SE Embedded. Tutorial: Using Oracle Java SE Embedded Support in NetBeans IDE. Learn more about Java ME Embedded. Video: NetBeans IDE Support for Java ME 8. Video: Installing and Using Java ME SDK 8.0 Plugins in NetBeans IDE. Follow Terrence Barr to keep up with news in the Embedded space: Blog and Twitter.

    26:02 / Simon Ritter - A Massive Serving of Raspberry Pi: Oracle's Raspberry Pi virtual course is back by popular demand! Simon Ritter, the head of Oracle's Java Technology Evangelism team, chats about the second run of the free Java Embedded course (starting May 30th), what participants can expect to learn, NetBeans' support for Java ME development, and other Java trainings coming to a desktop, laptop or user group near you. Sign up for the Oracle MOOC: Develop Java Embedded Applications Using Raspberry Pi. Find out when Simon Ritter and the Java Evangelism team are coming to a Java event or JUG in your area; follow them on Twitter: Simon Ritter, Angela Caicedo, Steven Chin, Jim Weaver.

    36:58 / Jaroslav Tulach - A Perfect Translation: Jaroslav Tulach returns to the NetBeans podcast with tales about the Japanese translation of the Practical API Design book, which he contends surpasses all previous translations, including the English edition! Order "Practical API Design" (Japanese Version). Find out why the Japanese translation is the best edition yet.

    Have ideas for NetBeans Podcast topics? Send them to nbpodcast at netbeans dot org. Subscribe to the official NetBeans page on Facebook! Check us out as well on Twitter, YouTube, and Google+.

    Read the article

  • From Trailer to Cloud: Skire acquisition expands Oracle’s on-demand project management options.

    - by Melissa Centurio Lopes
    By Alison Weiss. Whether building petrochemical facilities in the Middle East or managing mining operations in Australia, project managers face significant challenges. Local regulations and currencies, contingent labor, hybrid public/private funding sources, and more threaten project budgets and schedules. According to Mike Sicilia, senior vice president and general manager for the Oracle Primavera Global Business Unit, there will be trillions of dollars invested in industrial projects around the globe between 2012 and 2016. But even with so much at stake, project leads don't always have time to look for new and better enterprise project portfolio management (EPPM) software solutions to manage large-scale capital initiatives across the enterprise. Oracle's recent acquisition of Skire, a leading provider of capital program management and facilities management applications available both in the cloud and on premises, gives customers outstanding new EPPM options. By combining Skire's cloud-based solutions for managing capital projects, real estate, and facilities with Oracle's Primavera EPPM solutions, project managers can quickly get a solution running that is interoperable across an extended enterprise. "Staff can access the EPPM solution within days, rather than waiting for corporate IT to put technology in place," says Sicilia. This applies to a problem that has, according to Sicilia, bedeviled project managers for decades: extending EPPM functionality into the field. Frequently, large-scale projects are remotely located, and the lack of communications and IT infrastructure threatens the accuracy of project reporting and scheduling. Read the full version of this article in the November 2012 edition of Oracle's Profit Magazine: Special Report on Project Management

    Read the article

  • SQL SERVER – Online Index Rebuilding Index Improvement in SQL Server 2012

    - by pinaldave
    Have you ever faced a situation where you see something working and feel it should not be working? Well, I had a similar moment a few days ago. I know that SQL Server 2008 supports online indexing. However, I also know that I cannot rebuild an index ONLINE if I have used VARCHAR(MAX), NVARCHAR(MAX) or a few other data types. While I held this belief very strongly, I came across a situation where I had to go online and do a little bit of reading from Books Online. Here is a similar example. First of all, run the following code in SQL Server 2008 or SQL Server 2008 R2:

        USE TempDB
        GO
        CREATE TABLE TestTable (ID INT, FirstCol NVARCHAR(10), SecondCol NVARCHAR(MAX))
        GO
        CREATE CLUSTERED INDEX [IX_TestTable] ON TestTable (ID)
        GO
        CREATE NONCLUSTERED INDEX [IX_TestTable_Cols] ON TestTable (FirstCol) INCLUDE (SecondCol)
        GO
        USE [tempdb]
        GO
        ALTER INDEX [IX_TestTable_Cols] ON [dbo].[TestTable] REBUILD WITH (ONLINE = ON)
        GO
        DROP TABLE TestTable
        GO

    Now run the same code in SQL Server 2012. Observe the difference between the two executions; you will get the following results. In SQL Server 2008/R2 it will throw the following error:

        Msg 2725, Level 16, State 2, Line 1
        An online operation cannot be performed for index 'IX_TestTable_Cols' because the index contains column 'SecondCol' of data type text, ntext, image, varchar(max), nvarchar(max), varbinary(max), xml, or large CLR type. For a non-clustered index, the column could be an include column of the index. For a clustered index, the column could be any column of the table. If DROP_EXISTING is used, the column could be part of a new or old index. The operation must be performed offline.

    In SQL Server 2012 it will run successfully and will not throw any error:

        Command(s) completed successfully.

    I always thought it would throw an error if VARCHAR(MAX) or NVARCHAR(MAX) were used in the table schema definition. When I saw this result, it was clear to me that this is for sure not a bug but an enhancement in SQL Server 2012. As a matter of fact, I always wanted this feature to be added to the SQL Server engine, as it enables ONLINE index rebuilding for mission-critical tables that need to be always online. I quickly searched online and landed on Jacob Sebastian's blog, where he has blogged about it as well. Well, is there any other new feature in SQL Server 2012 which gave you a good surprise? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
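    For readers still on SQL Server 2008/R2, the error message itself points at the workaround: any index that contains a LOB column must be rebuilt offline, while SQL Server 2012 allows the same rebuild online. A minimal sketch, reusing the table and index names from the example above:

        -- SQL Server 2008/R2: both indexes touch SecondCol (NVARCHAR(MAX)), since
        -- the nonclustered index INCLUDEs it and the clustered index covers every
        -- column of the table, so the rebuild must be performed offline.
        ALTER INDEX ALL ON [dbo].[TestTable] REBUILD WITH (ONLINE = OFF);

        -- SQL Server 2012: the same rebuild is now allowed to run online,
        -- keeping the table available to queries during the operation.
        ALTER INDEX ALL ON [dbo].[TestTable] REBUILD WITH (ONLINE = ON);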

    Read the article

  • Asp.net session on browser close

    - by budugu
    Note: Cross posted from Vijay Kodali's Blog. Permalink How to capture the logoff time when a user closes the browser? Or how to end a user's session when the browser is closed? These are some of the frequently asked questions in ASP.NET forums. In this post I'll show you how to do this when you're building an ASP.NET web application. Before we start, one fact: there is no foolproof technique to catch the browser close event 100% of the time. The trouble lies in the stateless nature of HTTP. The web server is out of the picture as soon as it finishes sending the page content to the client. After that, all you can rely on is a client-side script, and unfortunately there is no reliable client-side event for browser close. Solution: The first thing you need to do is create the web service. I've added a web service and named it AsynchronousSave.asmx. Make this web service accessible from script by qualifying the class with the ScriptServiceAttribute attribute. Add a method (SaveLogOffTime) marked with the [WebMethod] attribute. This method simply accepts UserId as a string variable and writes that value and the logoff time to a text file, but you can pass as many variables as required. You can then use this information for many purposes. To end the user session, you can just call Session.Abandon() in the above web method. To enable the web service to be called from the page's client-side code, add a ScriptManager to the page; here I am adding it to the SessionTest.aspx page. When the user closes the browser, the onbeforeunload event fires on the client side. Our final step is attaching a JavaScript function to that event, which makes the web service call. The code is simple but effective; the original listings for SessionTest.aspx (HTML) and SessionTest.aspx.cs (C#) did not survive in this excerpt, so a sketch follows below. That's it. Run the application and, after the browser closes, open the text file to see the logoff time. The above code works well in IE 7/8. If you have any questions, leave a comment.
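    Since the original listings are missing, here is a minimal sketch of the web service described above. It is an illustration, not the author's exact code: the class and method names (AsynchronousSave, SaveLogOffTime) come from the post, while the log file path is a placeholder assumption.

        // AsynchronousSave.asmx.cs: a minimal sketch of the service described above.
        using System;
        using System.IO;
        using System.Web.Services;
        using System.Web.Script.Services;

        [WebService(Namespace = "http://tempuri.org/")]
        [ScriptService]   // makes the service callable from client-side script
        public class AsynchronousSave : WebService
        {
            // EnableSession = true gives the method access to Session.Abandon().
            [WebMethod(EnableSession = true)]
            public void SaveLogOffTime(string userId)
            {
                // The log file path is a placeholder; write wherever suits you.
                File.AppendAllText(@"C:\Logs\LogOff.txt",
                    string.Format("{0} logged off at {1}{2}",
                        userId, DateTime.Now, Environment.NewLine));

                Session.Abandon();   // end the user's session, as described in the post
            }
        }

    On the page side, a ScriptManager with a ServiceReference pointing at AsynchronousSave.asmx generates a JavaScript proxy, and a handler attached to window.onbeforeunload calls AsynchronousSave.SaveLogOffTime(userId) as the browser closes.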

    Read the article

  • "exception at 0x53C227FF (msvcr110d.dll)" with SOIL library

    - by Sean M.
    I'm creating a game in C++ using OpenGL, and decided to go with the SOIL library for image loading, as I have used it in the past to great effect. The problem is, in my newest game, trying to load an image with SOIL throws the runtime error named in the title. The error points to this part of SOIL:

        // SOIL.c
        int query_NPOT_capability( void )
        {
            /* check for the capability */
            if( has_NPOT_capability == SOIL_CAPABILITY_UNKNOWN )
            {
                /* we haven't yet checked for the capability, do so */
                if( (NULL == strstr( (char const*)glGetString( GL_EXTENSIONS ),
                        "GL_ARB_texture_non_power_of_two" ) ) )   /* <-- it points here */
                {
                    /* not there, flag the failure */
                    has_NPOT_capability = SOIL_CAPABILITY_NONE;
                }
                else
                {
                    /* it's there! */
                    has_NPOT_capability = SOIL_CAPABILITY_PRESENT;
                }
            }
            /* let the user know if we can do non-power-of-two textures or not */
            return has_NPOT_capability;
        }

    Since it points to the line where SOIL reads the OpenGL extension string, I think that for some reason SOIL is trying to load the texture before an OpenGL context is created. The problem is, I've gone through the entire solution, and there is only one place where SOIL has to load a texture, and it happens long after the OpenGL context is created. This is the part where it loads the texture:

        //Init glfw
        if (!glfwInit())
        {
            fprintf(stderr, "GLFW Initialization has failed!\n");
            exit(EXIT_FAILURE);
        }
        printf("GLFW Initialized.\n");

        //Process the command line arguments
        processCmdArgs(argc, argv);

        //Create the window
        glfwWindowHint(GLFW_SAMPLES, g_aaSamples);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
        g_mainWindow = glfwCreateWindow(g_screenWidth, g_screenHeight, "Voxel Shipyard",
            g_fullScreen ? glfwGetPrimaryMonitor() : nullptr, nullptr);
        if (!g_mainWindow)
        {
            fprintf(stderr, "Could not create GLFW window!\n");
            closeOGL();
            exit(EXIT_FAILURE);
        }
        glfwMakeContextCurrent(g_mainWindow);
        printf("Window and OpenGL rendering context created.\n");

        //Create the internal rendering components
        prepareScreen();

        //Init glew
        glewExperimental = GL_TRUE;
        int err = glewInit();
        if (err != GLEW_OK)
        {
            fprintf(stderr, "GLEW initialization failed!\n");
            fprintf(stderr, "%s\n", glewGetErrorString(err));
            closeOGL();
            exit(EXIT_FAILURE);
        }
        printf("GLEW initialized.\n");   // <-- Successfully creates an OpenGL context

        //Initialize the app
        g_app = new App();
        g_app->PreInit();
        g_app->Init();
        g_app->PostInit();               // <-- Loads the texture (after the context is created)

    ...and debug printing to the console CONFIRMS that the OpenGL context was created before the texture loading was attempted. So my question is whether anyone is familiar with this specific error, or knows of a specific reason why SOIL would think OpenGL isn't initialized yet.
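    One way to test the hypothesis above (a diagnostic sketch, not part of the original question): query the OpenGL strings yourself immediately before the SOIL call. If a context really is current, glGetString(GL_VERSION) returns a valid string. Note, however, that in a 3.2+ core-profile context the legacy glGetString(GL_EXTENSIONS) query is removed and returns NULL even with a context bound, and SOIL passes that result straight into strstr, which would produce exactly this kind of access violation.

        // Diagnostic sketch: place immediately before the SOIL texture load.
        const GLubyte* version = glGetString(GL_VERSION);
        printf("GL_VERSION: %s\n",
            version ? (const char*)version : "(null: is a context current?)");

        // In an OpenGL 3.2+ core profile this legacy query returns NULL,
        // and SOIL's strstr() on that NULL crashes even with a valid context.
        const GLubyte* extensions = glGetString(GL_EXTENSIONS);
        printf("GL_EXTENSIONS string: %s\n", extensions ? "available" : "NULL");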

    Read the article

< Previous Page | 705 706 707 708 709 710 711 712 713 714 715 716  | Next Page >