Search Results

Search found 16984 results on 680 pages for 'business logic layer'.

  • JavaOne 2013: (Key) Notes of a conference – State of the Java platform and all the roadmaps by Amis

    - by JuergenKress
    Last week's JavaOne conference provided insight into the roadmap of the Java platform, as well as into the current state of things in the Java community. Among the notable observations were the close relationship between Oracle and IBM concerning Java, the (continuing) lack of such a relationship with Google, the support from Microsoft for Java applications on its Azure cloud, and the vibrant developer community, with over 200 different Java User Groups in many countries of the world. There were no major surprises or stunning announcements. Java EE 7 (released in June) was celebrated, and the progress of Java SE 8 was explained, as was the progress on Java Embedded and ME. The availability of NetBeans 7.4 RC1 and the JDK 8 Early Adopters release, as well as the open sourcing of project Avatar, were probably the only real news stories.

    The convergence of JavaFX and Java SE is almost complete; the upcoming alignment of Java SE Embedded and Java ME is the next big consolidation step, one that will lead to a unified platform where developers can use the same skills, development tools and APIs for EE, SE, SE Embedded and ME development. This means that anything that runs on ME will run on SE (Embedded) and EE - not necessarily the reverse, because not all SE APIs are part of the compact profile or the ME environment. However, the trimming down of the SE libraries and the increased capabilities of devices mean that a pretty rich JVM runs on many devices - such as JavaFX 8 on the Raspberry Pi.

    The major theme of the conference was the Internet of Things: a world of things that are smart and connected - devices like sensors, cameras and equipment, from cars, fridges and television sets to printers, security gates and kiosks - that all run Java and are all capable of sending data over local network connections or directly over the internet. The number of devices that have these capabilities is rapidly growing. This means that the number of places where Java programs can help program the behavior of devices is growing too. It also means that the volume of data generated is expanding, and that we have to find ways to harvest that data, possibly do local pre-processing (filter, aggregate) and channel the data to back-end systems. Terms typically used are edge devices (small, simple, publishing data), gateways (receiving data from many devices; collecting, consolidating and pre-processing it; and sending it onwards to the back end, typically using real-time event processing) and enterprise services, which receive the data-turned-information from the gateways to further consolidate, distribute and act upon. A cheap device like the Raspberry Pi is a perfect way for a Java developer to get started with what embedded (device) programming means and how interaction with physical input and output takes place.

    Roadmaps: the overall progress on Java is visualized in an overview in the full article, which you can read here.

  • XNA WP7 Texture memory and ContentManager

    - by jlongstreet
    I'm trying to get my WP7 XNA game's memory under control and under the 90MB limit for submission. One target I identified was UI textures, especially fullscreen ones, that would never get unloaded. So I moved UI texture loads to their own ContentManager so I can unload them. However, this doesn't seem to affect the value of Microsoft.Phone.Info.DeviceExtendedProperties.GetValue("ApplicationCurrentMemoryUsage"), so it doesn't look like the memory is actually being released. Example: splash screens. In Game.LoadContent():

        Application.Instance.SetContentManager("UI"); // set the current content manager
        // load the splash textures
        for (int i = 0; i < mSplashTextures.Length; ++i)
        {
            mSplashTextures[i] = Application.Instance.Content.Load<Texture2D>(msSplashTextureNames[i]);
        }
        // set the content manager back to the global one
        Application.Instance.SetContentManager("Global");

    When the initial load is done and the title screen starts, I do:

        Application.Instance.GetContentManager("UI").Unload();

    The three textures take about 6.5 MB of memory. Before unloading the UI ContentManager, I see that ApplicationCurrentMemoryUsage is at 34.29 MB. After unloading the ContentManager (and doing a full GC.Collect()), it's still at 34.29 MB. But after that, I load another fullscreen texture (for the title screen background) and memory usage still doesn't change. Could it be keeping the memory for these textures allocated and reusing it?

    Edit: a very simple test:

        protected override void LoadContent()
        {
            // Create a new SpriteBatch, which can be used to draw textures.
            spriteBatch = new SpriteBatch(GraphicsDevice);
            PrintMemUsage("Before texture load: ");
            red = this.Content.Load<Texture2D>("Untitled");
            PrintMemUsage("After texture load: ");
        }

        private void PrintMemUsage(string tag)
        {
            Debug.WriteLine(tag + Microsoft.Phone.Info.DeviceExtendedProperties.GetValue("ApplicationCurrentMemoryUsage"));
        }

        protected override void Update(GameTime gameTime)
        {
            // Allows the game to exit
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();

            if (count++ == 100)
            {
                GC.Collect();
                GC.WaitForPendingFinalizers();
                PrintMemUsage("Before CM unload: ");
                this.Content.Dispose();
                GC.Collect();
                GC.WaitForPendingFinalizers();
                PrintMemUsage("After CM unload: ");
                red = null;
                spriteBatch.Dispose();
                GC.Collect();
                GC.WaitForPendingFinalizers();
                PrintMemUsage("After SpriteBatch Dispose(): ");
            }
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            if (red != null)
            {
                spriteBatch.Begin();
                spriteBatch.Draw(red, new Vector2(0, 0), Color.White);
                spriteBatch.End();
            }
            base.Draw(gameTime);
        }

    This prints something like (it changes every time):

        Before texture load: 7532544
        After texture load: 10727424
        Before CM unload: 9875456
        After CM unload: 9953280
        After SpriteBatch Dispose(): 9953280

  • Powerful Lessons in Data from the Presidential Election

    - by Christina McKeon
    Now that we've had a few days to recover from the U.S. presidential election, it's a good time to take a step back from politics and look for the customer experience lessons that we can take away. The most powerful lesson is that when you know more about your base, you will have an advantage over your competition. That advantage will translate into you winning and your competition losing.

    Michael Scherer of TIME was given access to Obama's data analysts two days before the election. His account is documented in Inside the Secret World of the Data Crunchers Who Helped Obama Win. What we learned from Scherer's inside view is how well Obama's team did in getting the right data, analyzing it, and acting on it. This data team recognized how critical it was to break down data silos within the campaign. As Scherer noted, they created "a single system that merged information from pollsters, fundraisers, field workers, consumer databases, and social-media and mobile contacts with the main Democratic voter files in the swing states." The Obama analysis was so meticulous that they knew which celebrity, and which type of celebrity event, would help them maximize campaign contributions. With a single system, their data models became more precise. They determined which messages were more successful with specific demographic groups, and that who made the calls mattered. Data analysis also led to many other changes in Obama's campaign, including a new ad-buying strategy, using social media and applications to tap into supporters' friends, and using new social news sites.

    While we did not have that same inside view into Romney's campaign, much of the post-mortem coverage indicates that Romney's team did not have the right analysis. As Peter Hamby of CNN wrote in Analysis: Why Romney Lost, "Romney officials had modeled an electorate that looked something like a mix of 2004 and 2008...." That historical data did not account for the changing demographics in the U.S.

    Does your organization approach data like the Obama team or the Romney team? Do you really know your base? How well can you predict what is going to happen in your business? If you haven't already put together a strategy and plan to know more, this week's civics lesson is a powerful reason to do it sooner rather than later. Your competitors are probably thinking the same thing that you are!

  • Multitenancy in SQL Azure

    - by cibrax
    If you are building a SaaS application in Windows Azure that relies on SQL Azure, it's probable that you will need to support multiple tenants at the database level. This is a short overview of the different approaches you can use to support that scenario.

    A different database per tenant
    A new database is created and assigned when a tenant is provisioned.
    Pros:
    - Complete isolation between tenants. All the data for a tenant lives in a database only that tenant can access.
    Cons:
    - It's not cost effective. SQL Azure databases are not cheap, and the minimum size for a database is 1GB. You might be paying for storage that you don't really use.
    - A different connection pool is required per database.
    - Updates must be replicated across all the databases.
    - You need multiple backup strategies across all the databases.

    Multiple schemas in a database shared by all the tenants
    A single database is shared among all the tenants, but every tenant is assigned a different schema and database user.
    Pros:
    - You only pay for a single database.
    - Data is isolated at schema level. If the credentials for one tenant are compromised, the data for the other tenants is not.
    Cons:
    - You need to replicate all the database objects in every schema, so the number of objects can increase indefinitely.
    - Updates must be replicated across all the schemas.
    - The connection pool for the database must maintain a different connection per tenant (or set of credentials).
    - A different user is required per tenant, and users are stored at server level, so you have to back each one up independently.

    Centralizing database access with stored procedures in a database shared by all the tenants
    A single database is shared among all the tenants, but nobody can read the data directly from the tables. All data operations are performed through stored procedures that centralize access to the tenant data. The stored procedures contain some logic to map the database user to a specific tenant.
    Pros:
    - You only pay for a single database.
    - You only have a single set of objects to maintain and back up.
    Cons:
    - There is no real isolation. All the data for the different tenants is shared in the same tables.
    - You cannot use a traditional ORM like EF Code First for consuming the data.
    - A different user is required per tenant, and users are stored at server level, so you have to back each one up independently.

    SQL Federations
    A single database is shared among all the tenants, but a different federation is used per tenant. A federation, in a few words, is a mechanism for horizontal scaling in SQL Azure which basically uses the idea of logical partitions to distribute data based on certain criteria.
    Pros:
    - You only have a single database with multiple federations.
    - You can use filtering in the connections to pick the right federation, so any ORM could be used to consume the data.
    Cons:
    - There is no real isolation at the database level. The isolation is enforced programmatically with federations.

  • How quickly to leave contract-to-hire gig where you don't want to be hired? [closed]

    - by nono
    So you move to a big new city with tons of software development opportunity, having taken a six-month contract-to-hire job. The company treats you really well and has a good team and work environment. However, the recruiter assured you when offering the gig that it would be a good position in which to advance your learning from more senior developers (a primary concern of yours), and you're starting to realize that a job recruiter isn't going to understand these things. The team in question isn't very up on modern software practices (you start to sympathize with this guy and read his post over and over again: http://stackoverflow.com/questions/1586166/career-killer-nhibernate-oop-design-patterns-domain-driven-design-test-driv), much of the company's software is very old and very, very poorly architected, and the company (like so many others) seems concerned only with continually extending the software without investing in any structural improvements.

    You're absolutely dismayed at how long it takes your team (yourself included) to fulfill simple feature requests (maybe 500-1000% longer than with better-designed software that you've worked on in the past), but no one else there seems to think anything of it. You find that the work and the company's business are intensely uninteresting to you, but due to the convoluted design of their various software systems, fulfilling the work will require as much mental engagement as any other development position. You feel a bit naive about not having asked the right questions during your interview process, and for not having anticipated that your team at your former podunk company might possibly be light-years ahead of any team in Big Shiny City, but you know you don't want to stay at this place, and (were it not for your personal, after-hours studying and programming efforts) you fear that you might actually give a worse interview after completing your 6 months than you did when you started at the place.

    You read about how hard a time local companies are having filling their positions with qualified software development candidates. You read all sorts of fabulous-sounding job postings online and feel like you're really missing out. In spite of the comfortable environment, you feel like you would willingly accept a somewhat more demanding or aggressive lifestyle to feel like you are learning, progressing and producing something meaningful.

    My questions are: how quickly do you leave, and how do you go about giving a polite reason for departing? The contract is written to allow them to "can" you and to allow you to leave with 2 weeks' notice. Do you ethically owe the 6 months? Upon taking the position, the company told you they were not interested in candidates who were intending to stay for only 6 months and then leave (you were not intending to bail after 6 months, at that time), so perhaps they might be fine if you split now, knowing that you don't want to stick around for the full-time hire?

  • "ODM" - One of the Support team's most valued acronyms

    - by graham.mckendry(at)oracle.com
    If you submit technical service requests (SRs) through the My Oracle Support portal, you may often see the term "ODM" used in updates from our Support team. ODM is an acronym for "Oracle Diagnostic Methodology", which defines a standard problem-solving approach that all of Oracle Support uses for every technical SR. ODM provides a number of benefits to the SRs - both for the Support organization and for the customer - including a consistent approach, higher quality, justified solutions, and ultimately faster resolution.

    Screenshot: Example of an ODM "Issue Clarification" activity in a service request

    The Oracle Diagnostic Methodology applies to both categories of technical SRs: Consultative (question-answer topics) and Problem-Solution. There are a few KM Notes that describe the steps of ODM; however, to keep things simple (and since those KM Notes appear to be a bit outdated), I'll summarize the ODM stages here as follows:

    Consultative ODM - Three mandatory stages:
    1. ODM Question: Clarification of the customer's exact question.
    2. ODM Answer: Thorough answer to the customer's question.
    3. ODM Knowledge Content: Reference to new or existing knowledge base content, or explanation of why the particular SR does not necessarily require knowledge content.

    Problem-Solution ODM - Eight mandatory stages:
    1. ODM Issue Clarification: Clarification of the reported issue, including the symptoms, the steps to reproduce, and an outline of the business impact.
    2. ODM Issue Verification: Confirmation that the issue has been verified, based on proof provided by the customer such as screenshots or log files, or by reproducing the issue during an Oracle Web Conference.
    3. ODM Cause Determination: Succinct outline of the root cause of the issue.
    4. ODM Cause Justification: Explanation as to why the root cause applies to this particular situation.
    5. ODM Proposed Solution(s): Succinct outline of the potential solution(s) to resolve the issue.
    6. ODM Proposed Solution(s) Justification: Explanation of why the proposed solution(s) will in fact resolve the issue.
    7. ODM Solution Action Plan: Detailed numbered instructions on how to execute the proposed solutions.
    8. ODM Knowledge Content: Reference to new or existing knowledge base content, or explanation of why the particular SR does not necessarily require knowledge content.

    During these stages, you may see other optional ODM-related activities such as "ODM Data Collection", "ODM Action Plan", "ODM Research", and "ODM Test Case". Again, these structured tags help ensure a uniform methodology across your SRs. With this knowledge you should be able to develop better predictability of what's coming next in your SRs, as well as what you can do to help expedite the resolution process.

  • Software Architecture Analysis Method (SAAM)

    Software Architecture Analysis Method (SAAM) is a methodology used to determine how specific application quality attributes were achieved, and how possible changes in the future will affect those quality attributes, based on hypothetical case studies. Common quality attributes that can be evaluated with this methodology include modifiability, robustness, portability, and extensibility.

    Quality Attribute: Application Modifiability
    The Modifiability quality attribute refers to how easy changing the system in the future will be. This to me is a very open-ended attribute, because a business could decide to transform a Point of Sale (POS) system into a Lead Tracking system overnight. (Yes, this did actually happen to me.) In order for SAAM to be properly applied to checking this attribute, specific hypothetical case studies need to be created and reviewed for the modifiability attribute, because various scenarios would return various results based on the amount of change involved. In the case of the POS system, changing out a payment gateway or adding an additional payment method would have scored very highly in comparison to changing the system over to a lead management system. I personally would evaluate this quality attribute based on the S.O.L.I.D principles of software design (see the sketch at the end of this post). I have found from my experience that the use of S.O.L.I.D in software design allows for the adoption of changes within a system.

    Quality Attribute: Application Robustness
    The Robustness quality attribute refers to how an application handles the unexpected. The unexpected can be defined as, but is not limited to, anything not anticipated in the original design of the system. For example: bad data, limited to no network connectivity, invalid permissions, or any unexpected application exceptions. I would personally evaluate this quality attribute based on how the system handles exceptions.
    Robustness Considerations:
    - Did the system stop, or did it handle the unexpected error?
    - Did the system log the unexpected error for future debugging?
    - What message did the user receive about the error?

    Quality Attribute: Application Portability
    The Portability quality attribute refers to the ease of porting an application to run on a new operating system or device. For example, it is much easier to alter an ASP.NET website to be accessible by a PC, Mac, iPhone, Android phone, mini PC, or tablet than to port a desktop application written in VB.NET, because a lot more work would be involved to get the desktop app to the point where porting it to the various environments and devices would be viable. I would personally evaluate this quality attribute based on each new environment that the hypothetical case study identifies, paying particular attention to the following items.
    Portability Considerations:
    - Hardware dependencies
    - Operating system dependencies
    - Data source dependencies
    - Network dependencies and availability

    Quality Attribute: Application Extensibility
    The Extensibility quality attribute refers to the ease of adding new features to an existing application without impacting existing functionality. I would personally evaluate this quality attribute based on the following considerations.
    Extensibility Considerations:
    - Hard-coded variables versus configurable variables
    - Application documentation (external documents and codebase documentation)
    - The use of S.O.L.I.D design principles
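
    As an illustration of the modifiability point, here is a minimal, hypothetical C++ sketch of the POS example above: because payment gateways sit behind an interface (the Dependency Inversion and Open/Closed principles from S.O.L.I.D), swapping a gateway out is a localized change rather than a rewrite. All class and method names here are invented for illustration.

        #include <memory>
        #include <string>
        #include <utility>

        // Hypothetical abstraction: the POS code depends only on this interface.
        class PaymentGateway {
        public:
            virtual ~PaymentGateway() = default;
            virtual bool charge(const std::string& account, double amount) = 0;
        };

        // Swapping providers means adding a new implementation,
        // not editing the point-of-sale logic itself.
        class AcmeGateway : public PaymentGateway {
        public:
            bool charge(const std::string& account, double amount) override {
                // ... call the provider's API here ...
                return true;
            }
        };

        class PointOfSale {
        public:
            explicit PointOfSale(std::unique_ptr<PaymentGateway> gateway)
                : gateway_(std::move(gateway)) {}

            bool checkout(const std::string& account, double total) {
                return gateway_->charge(account, total);
            }

        private:
            std::unique_ptr<PaymentGateway> gateway_;
        };

    In a SAAM case study, the "swap the payment gateway" scenario would score well against this design, while the "turn the POS into a lead tracker" scenario would still score poorly - which is exactly the kind of distinction the hypothetical case studies are meant to surface.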

  • OpenGL Fast-Object Instancing Error

    - by HJ Media Studios
    I have some code that loops through a set of objects and renders instances of those objects. The list of objects that needs to be rendered is stored as a std::map, where an object of class MeshResource contains the vertices and indices with the actual data, and an object of class MeshRenderer defines the point in space the mesh is to be rendered at. My rendering code is as follows:

        glDisable(GL_BLEND);
        glEnable(GL_CULL_FACE);
        glDepthMask(GL_TRUE);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glEnable(GL_DEPTH_TEST);
        for (std::map<MeshResource*, std::vector<MeshRenderer*> >::iterator it = renderables.begin(); it != renderables.end(); it++)
        {
            it->first->setupBeforeRendering();
            cout << "<";
            for (unsigned long i = 0; i < it->second.size(); i++)
            {
                // Pass in an identity matrix to the vertex shader - used here only for
                // debugging purposes; the real code correctly inputs any matrix.
                uniformizeModelMatrix(Matrix4::IDENTITY);
                /**
                 * StartHere fix rendering problem.
                 * Ruled out:
                 * Vertex buffers correctly.
                 * Index buffers correctly.
                 * Matrices correct?
                 */
                it->first->render();
            }
            it->first->cleanupAfterRendering();
        }
        geometryPassShader->disable();
        glDepthMask(GL_FALSE);
        glDisable(GL_CULL_FACE);
        glDisable(GL_DEPTH_TEST);

    The function in MeshResource that handles setting up the uniforms is as follows:

        void MeshResource::setupBeforeRendering()
        {
            glEnableVertexAttribArray(0);
            glEnableVertexAttribArray(1);
            glEnableVertexAttribArray(2);
            glEnableVertexAttribArray(3);
            glEnableVertexAttribArray(4);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID);
            glBindBuffer(GL_ARRAY_BUFFER, vboID);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);                           // Vertex position
            glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 12);          // Vertex normal
            glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 24);          // UV layer 0
            glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 32);          // Vertex color
            glVertexAttribPointer(4, 1, GL_UNSIGNED_SHORT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 44); // Material index
        }

    The code that renders the object is this:

        void MeshResource::render()
        {
            glDrawElements(GL_TRIANGLES, geometry->numIndices, GL_UNSIGNED_SHORT, 0);
        }

    And the code that cleans up is this:

        void MeshResource::cleanupAfterRendering()
        {
            glDisableVertexAttribArray(0);
            glDisableVertexAttribArray(1);
            glDisableVertexAttribArray(2);
            glDisableVertexAttribArray(3);
            glDisableVertexAttribArray(4);
        }

    The end result of this is that I get a black screen, although the end of my rendering pipeline after the rendering code (essentially just drawing axes and lines on the screen) works properly, so I'm fairly sure it's not an issue with the passing of uniforms. If, however, I change the code slightly so that the rendering code calls the setup immediately before rendering, like so:

        void MeshResource::render()
        {
            setupBeforeRendering();
            glDrawElements(GL_TRIANGLES, geometry->numIndices, GL_UNSIGNED_SHORT, 0);
        }

    the program works as desired. I don't want to have to do this, though, as my aim is to set up vertex, material, etc. data once per object type and then render each instance, updating only the transformation information. The uniformizeModelMatrix function works as follows:

        void RenderManager::uniformizeModelMatrix(Matrix4 matrix)
        {
            glBindBuffer(GL_UNIFORM_BUFFER, globalMatrixUBOID);
            glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(Matrix4), matrix.ptr());
            glBindBuffer(GL_UNIFORM_BUFFER, 0);
        }
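
    One way to realize the "set up once per object type, then render each instance" goal described above is to record the attribute and buffer bindings in a vertex array object (VAO), so that per-instance work reduces to updating the transform. The following is a minimal, hypothetical C++ sketch assuming an OpenGL 3.0+ context, a GLEW-style loader, and the Vertex, Matrix4 and uniformizeModelMatrix definitions from the question; it illustrates the technique rather than being a confirmed fix for the black screen.

        #include <GL/glew.h> // assumption: GLEW is the extension loader in use
        #include <vector>

        // Hypothetical one-time setup per MeshResource: capture the vertex
        // attribute state in a VAO so it need not be respecified per draw.
        GLuint createMeshVAO(GLuint vboID, GLuint iboID)
        {
            GLuint vao = 0;
            glGenVertexArrays(1, &vao);
            glBindVertexArray(vao);

            glBindBuffer(GL_ARRAY_BUFFER, vboID);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iboID); // element binding is stored in the VAO

            glEnableVertexAttribArray(0); // position
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), 0);
            glEnableVertexAttribArray(1); // normal
            glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const GLvoid*) 12);
            // ... remaining attributes (UVs, color, material index) as in the question ...

            glBindVertexArray(0);
            return vao;
        }

        // Per frame: bind the VAO once per object type, then draw each instance,
        // updating only the per-instance transform (as the question intends).
        void renderInstances(GLuint vao, GLsizei numIndices, const std::vector<Matrix4>& transforms)
        {
            glBindVertexArray(vao);
            for (size_t i = 0; i < transforms.size(); ++i)
            {
                uniformizeModelMatrix(transforms[i]); // same uniform update as in the question
                glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);
            }
            glBindVertexArray(0);
        }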

  • “It Isn’t Easy At All; Otherwise, Everyone Would Be Doing It”

    - by Kathryn Perry
    A few months ago, JP Saunders (pictured left), who leads the go-to-market initiatives for the Oracle CX Service offering, kicked off a series of articles about modern customer service. He contends that to take care of customers, and the people that support those customers, companies need to make it easy to deliver consistently great experiences. But it's not easy; it's an art. The six posts in The Art of Easy series will help you better understand some of the customer service challenges you face and how to avoid common pitfalls. We pulled them all together here in one post for continuity and easy access.

    Saunders introduces the series with The Art of Easy: Make It Easy To Deliver Great Customer Service Experiences (Part 1).

    The Art of Easy: Offer Self Service With the Emphasis on Service (Part 2) by David Fulton (pictured left): David Fulton, Director of Product Management, Oracle Service Cloud, shares five tenets of customer self service that move an organization closer to becoming a modern customer service business.

    Easy Decisions For Complex Problems (Part 3) by Heike Lorenz (pictured right): Heike Lorenz, Director of Global Product Marketing, Policy Automation, writes about automating service policies to ensure that the correct decisions are being applied to the right people. The goal is to nurture the trusted relationships with customers during complex decision-making processes.

    Moving at the Speed of Easy (Part 4) by Chris Omland (pictured left): Chris Omland, Director of Product Management, Oracle Service Cloud, addresses the need for speed to keep up with customers' expectations. His advice: start with a platform that enables agile innovation, respects a company's unique needs, and has proven reliability to protect customer relationships.

    Knowledge Makes It Easy For Everyone (Part 5) by Nav Chakravarti (pictured right): Vice President Nav Chakravarti, Oracle Service Cloud, talks about managing the knowledge that customers need and want. He coaches readers on delivering answers to customers' questions easily, in context, with relevance, reliably, and accurately.

    Making Easy, Both Effective and Efficient (Part 6) by Melinda Uhland (pictured left): Melinda Uhland, Oracle CX Product Management, teaches us that happy agents produce happy customers. A Modern Customer Service organization is one that invests in its agents and empowers them with tools to make them efficient and effective, which, in turn, improves customer results.

  • PBCS Hyperion Planning in the Cloud Implementation Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    Oracle Planning and Budgeting Cloud Service (PBCS) opens up opportunities for organizations of all sizes to streamline planning and forecasting, accelerate deployment, and reduce costs. This one-day in-person workshop is delivered by Oracle Development (free to OPN member partners), and will cover the handoff from selling to implementing PBCS. Although the basic building blocks are the same as with on-premises Planning, there is a paradigm shift when it comes to selling and implementing a Cloud Service solution. The value proposition behind Oracle Planning and Budgeting Cloud Service is all about the deployment model, how it's sold and how it gets implemented: simplicity, fast adoption and flexible deployment, without sacrificing first-class functionality. To be successful, the entire cycle from sales to implementation should consistently support this value proposition to your clients.

    This training event is for OPN member partners whose business roles involve presales, implementation consulting, and support. The workshop briefly reviews the sales approach, as background, with emphasis on partner sales support. The main objective is to learn what is needed to successfully implement Oracle Planning and Budgeting Cloud Service once the sales handoff is made: how to leverage your current Hyperion Planning knowledge and use the features designed specifically to build out a Cloud Service solution.

    This workshop is being offered at three locations for partners from all countries in EMEA:
    - June 24, 2014: Kista, Sweden
    - June 26, 2014: Reading, United Kingdom
    - June 29-30, 2014 (split days): Dubai, United Arab Emirates

    To get more information, to check pre-requisites, and to register, click here.

  • Searching for context in Silverlight applications

    - by PeterTweed
    A common behavior in business applications that have developed through the ages is for a user to be able to get information or execute commands in relation to some displayed information/function by right-clicking the object in question and popping up a context menu that offers relevant options to choose from.

    The Silverlight Toolkit April 2010 release introduced the context menu object. This can be added to other UI objects and display options for the user to choose. The menu items can be enabled or disabled as per your application logic, and icons can be added to the menu items to add visual effect. This post will walk you through how to use the context menu object from the Silverlight Toolkit.

    Steps:

    1. Create a new Silverlight 4 application.

    2. Copy the following namespace definition to the user control object of the MainPage.xaml file:

        xmlns:my="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Input.Toolkit"

    3. Copy the following XAML into the LayoutRoot grid in MainPage.xaml:

        <Border CornerRadius="15" Background="Blue" Width="400" Height="100">
            <TextBlock Foreground="White" FontSize="20" Text="Context Menu In This Border...." HorizontalAlignment="Center" VerticalAlignment="Center">
            </TextBlock>
            <my:ContextMenuService.ContextMenu>
                <my:ContextMenu>
                    <my:MenuItem Header="Copy" Click="CopyMenuItem_Click" Name="copyMenuItem">
                        <my:MenuItem.Icon>
                            <Image Source="copy-icon-small.png"/>
                        </my:MenuItem.Icon>
                    </my:MenuItem>
                    <my:Separator/>
                    <my:MenuItem Name="pasteMenuItem" Header="Paste" Click="PasteMenuItem_Click">
                        <my:MenuItem.Icon>
                            <Image Source="paste-icon-small.png"/>
                        </my:MenuItem.Icon>
                    </my:MenuItem>
                </my:ContextMenu>
            </my:ContextMenuService.ContextMenu>
        </Border>

    The above code associates a context menu, with two menu items and a separator between them, to the border object. The menu items have icons associated with them to add visual appeal, and click event handlers that will be added in the MainPage.xaml.cs code-behind in a later step.

    4. Add two icon-sized images to the ClientBin directory of the web project hosting the Silverlight application, named copy-icon-small.png and paste-icon-small.png respectively. I used copy and paste icons, as the names suggest.

    5. Add the following code to the class in the MainPage.xaml.cs file:

        private void CopyMenuItem_Click(object sender, RoutedEventArgs e)
        {
            MessageBox.Show("Copy selected");
        }

        private void PasteMenuItem_Click(object sender, RoutedEventArgs e)
        {
            MessageBox.Show("Paste selected");
        }

    This code adds the event handlers for the menu items defined in step 3.

    6. Run the application, right-click on the border, select a menu option, and see the appropriate message box displayed.

    Congratulations, it's that easy!

  • Some Problems Can't Be Outsourced

    - by mikef
    More and more companies are becoming attracted to the idea of Infrastructure as a Service (IaaS). It would seem that you can outsource the provisioning and management of your services, encompassing everything from email, through to your servers, workstations and software, all the way down to your LAN and internet services. This type of outsourcing can be a very attractive option for companies that have tight budgets, are short of technical skills, or don't have the means to provide long-term IT support. Essentially, they can outsource their services at low short-term costs that are knowable and controllable, are quickly and easily scalable, and generate a minimum of hassle for internal staff. If you want to get a sophisticated IT infrastructure set up in a hurry, without the usual high buy-in costs or the task of finding and hiring the right specialists, it would seem the way to go - particularly when the salesmen are hypnotizing you with oleaginous phrases such as "we are closely aligned with our client organization's core business requirements, providing agile services".

    It sounds too good to be true, and so it is. Whereas the costs will initially have been calculated on the annual renewal fees and service fees for ongoing support, there are other charges too which aren't so obvious. It can end up costing far more than the conventional solution once you take into account the extra costs: the fees for customization and upgrades. The Total Cost of Ownership (TCO) only becomes apparent when it is too late to extract the company easily from the arrangement. After a few years, these annual fees can add up to more than the initial cost of implementing a traditional in-house system. Worse than that, you can then lose your power to determine your own priorities: when you become reliant on this company, with its own schedule of priorities, to implement every change, however simple, you have effectively lost control of your technical infrastructure. This will make senior management very nervous.

    There is definitely a requirement for this sort of service. If you urgently need an exceptionally high class of service or more expertise than you currently possess, then outsourcing is probably for you. You and your IT colleagues will always have something to do, be it user assistance, smoothing out integrations with an external provider, or working on something entirely new. Heck, if you outsource to IBM, the SysAdmins can go along for the ride and polish their expertise. What you need to figure out is how much your time is worth, because time is ultimately all that outsourcing will buy you and your organization. Now you just need to convince your nervous CEO.

    Cheers,
    Michael

  • JCP.next.3: time to get to work

    - by Patrick Curran
    As I've previously reported in this blog, we planned three JSRs to improve the JCP's processes and to meet our members' expectations for change. The first - JCP.next.1, or more formally JSR 348: Towards a new version of the Java Community Process - was completed in October 2011. This focused on a small number of simple but important changes to make our process more transparent and to enable broader participation. We're already seeing the benefits of these changes as new and existing JSRs adopt the new requirements. However, because we wanted to complete this JSR quickly, we deliberately postponed a number of more complex items, including everything that would require modifying the JSPA (the legal agreement that members sign when they join the organization), to a follow-on JSR.

    The second JSR (JSR 355: JCP Executive Committee Merge) is in progress now and will complete later this year. This JSR is even simpler than the first, and is focused solely on merging the two Executive Committees into one for greater efficiency and to encourage synergies between the Java ME and Java SE platforms.

    Continuing the momentum to move Java and the JCP forward, we have just filed the third JSR (JCP.next.3) as JSR 358: A major revision of the Java Community Process. This JSR will modify the JSPA as well as the Process Document, and will tackle a large number of complex issues, many of them postponed from JSR 348. For these reasons we expect to spend a considerable amount of time working on it - at least a year, and probably more.

    The current version of the JSPA was created back in 2002, although some minor changes were introduced in 2005. Since then the organization and the environment in which we operate have changed significantly, and it is now time to revise our processes to ensure that they meet our current needs. We have a long list of topics to be considered, including the role of independent implementations (those not derived from the Reference Implementation), licensing and open source, ensuring that our new transparency requirements are implemented correctly, compatibility policy and TCKs, the role of individual members, patent policy, and IP flow.

    The Expert Group for JSR 358, as with all process-change JSRs, consists of all members of the Executive Committees. Even though the JSR has just been filed, we started discussions on the various topics several months ago (see the EC's meeting minutes for details), and our EC members - including the new members who joined within the last year or two - are actively engaged.

    Now it's your opportunity to get involved. As required by version 2.8 of our Process (introduced with JSR 348), we will conduct all our business in the open. We have a public java.net project where you can follow and participate in our work. All of our deliberations will be copied to a public Observer mailing list, we'll track our issues on a public Issue Tracker, and all our documents (meeting agendas and minutes, task lists, working drafts) will be published in our Document Archive. We're just getting started, but we do want your input. Please visit us on java.net where you can learn how to participate. Let's get to work...

  • Is my class structure good enough?

    - by Rivten
    So I wanted to try out this challenge on reddit, which is mostly about structuring your data the best you can. I decided to challenge my C++ skills. Here's how I planned this.

    First, there's the Game class. It deals with time and is the only class main has access to. A Game has a Forest. For now, this class does not have a lot of things, only a size and a Factory; it will be put to better use when it comes to the SDL stuff, I guess. A Factory is the thing that deals with the game objects (a.k.a. Trees, Lumberjacks and Bears). It has a vector of all GameObjects and a queue of Events which will be managed at the end of one month. A GameObject is an abstract class which can be updated and which can notify the EventListener. The EventListener is a class which handles all the Events of a simulation. It can receive events from a GameObject and notify the Factory if needed; the latter will manage the event correctly. So, the Tree, Lumberjack and Bear classes all inherit from GameObject, and Sapling and Elder Tree inherit from Tree. Finally, an Event is defined by an event_type enumeration (LUMBERJACK_MAWED, SAPLING_EVOLUTION, ...) and an event_protagonists union (a GameObject, or a pair of GameObjects - who killed whom?). A sketch of this Event design follows at the end of this post.

    I was quite happy at first with this because it seems quite logical and flexible. But I ended up questioning this structure. Here's why:

    - I dislike the fact that a GameObject needs to know about the Factory. Indeed, when a Bear moves somewhere, it needs to know if there's a Lumberjack there! Either that, or it is the Factory which handles places and objects. It would be great if a GameObject could interact only with the EventListener... or maybe it's not that big of a deal.

    - Wouldn't it be better if I separated the Factory into three vectors, one for each kind of GameObject? The idea would be to optimize searches. If I'm looking to delete a dead lumberjack, I would only have to look in one shorter vector rather than a very long one. Another problem arises when I want to know if there is any particular object in a given cell, because I have to look through all the game objects and see whether they are at that cell. The other idea would be to use a matrix, but then the issue would be that I would have empty cells (and therefore unused space).

    - I don't really know if Sapling and Elder Tree should inherit from Tree. Indeed, a Sapling is a Tree, but what about its evolution? Should I just delete the Sapling and tell the Factory to create a new Tree at the exact same place? It doesn't seem natural to me to do so. How could I improve this?

    - Is the design of an Event good enough? I've never used unions before in C++, but I didn't have any other ideas about what to use.

    Well, I hope I have been clear enough. Thank you for taking the time to help me!
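
    As promised above, here is a minimal C++ sketch of the tagged-union Event described in the design. The names mirror the post where it gives them, with hypothetical placeholders where it leaves details open; an anonymous union of plain pointers sidesteps the restrictions on placing non-trivial types such as std::pair inside a union.

        class GameObject; // forward declaration

        // Event types from the design above (list shortened for illustration).
        enum event_type { LUMBERJACK_MAWED, SAPLING_EVOLUTION };

        struct Event {
            event_type type;
            union {
                GameObject* single; // one protagonist, e.g. the sapling that evolves
                GameObject* duo[2]; // two protagonists, e.g. { killer, victim }
            };
        };

        // Usage sketch: the tag tells the EventListener which member is valid.
        inline Event makeKillEvent(GameObject* killer, GameObject* victim) {
            Event e;
            e.type = LUMBERJACK_MAWED;
            e.duo[0] = killer;
            e.duo[1] = victim;
            return e;
        }

    One design note, offered tentatively: because the union only ever holds raw pointers, copying an Event stays cheap and safe, so the Factory's queue can store Events by value.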

  • Summit 2014 Registration Is Open

    - by KemButller
    Attention Oracle (employees) Field Team and Oracle JD Edwards Partners: REGISTRATION IS NOW OPEN for Oracle's 5th Annual JD Edwards Summit - Monday, January 27th through Friday, January 31st, 2014, in Broomfield, Colorado.

    The theme of this year's Summit is "Success Through Continued Innovation". Our goals are to update you on our current and future product roadmap, new products, selling strategies for new prospects, and growing the footprint in our JD Edwards install base, as well as to provide a venue for networking. The JD Edwards Summit is to the selling and servicing community what COLLABORATE is to the user community. This is a MUST ATTEND event if you recommend, sell, implement and/or support the JD Edwards product, whether you are new to JD Edwards or a seasoned pro, an executive, an account executive, in presales or in consulting. The Summit promises a content-rich and unique networking experience for all attendees.

    Highlights include:
    - Monday afternoon kicks off the Summit with a variety of workshops, as well as an afternoon preview of the Sponsor Showcase. Start your networking at the Summit kickoff party Monday evening.
    - Tuesday morning features several informative keynotes in the Summit General Assembly, followed by key messages delivered in Super Sessions in the afternoon, focused on each of the JD Edwards community audiences.
    - The educational offerings continue on Wednesday and Thursday with over 90 breakout sessions on topics spanning technology, applications (core JD Edwards, Edge, Fusion), sales, presales and implementation.
    - Friday concludes with new workshops for the implementation community.

    Attendees will be enriched with numerous opportunities to network with fellow partners and Oracle throughout the week. Consider bringing your team and using this venue to hold your own organization kickoff meeting prior to or post Summit. Contact Sheila Ebbitt (Sheila.ebbitt@oracle-DOT-com) for further assistance with your planning.

    Attendees will be charged a Summit fee of US$250. Online registration cut-off is January 17, 2014. All registration requests after that time will be processed on-site at the event with an attendee fee of US$500. Please contact Rene Chapman (rene.chapman@oracle-DOT-com) for information on sponsorship opportunities.

    For further details on the JD Edwards Summit, including agenda, workshops, educational sessions, lodging, sponsors and Summit registration, click here! Register now! This is going to be an awesome event!

    John Schiff
    Vice President, JD Edwards Business Development

  • Play in NetBeans IDE (Part 2)

    - by Geertjan
    Peter Hilton was one of many nice people I met for the first time during the last few days at JAX London. He did a session today on the Play framework which, if I understand it correctly, is an HTML5 framework. It doesn't use web.xml, Java EE, etc. It uses Scala internally, as well as in its templating language.

    Support for Play would, I guess, based on the little I know about it right now, consist of extending the HTML5 application project, which is new in NetBeans IDE 7.3. The workflow I imagine goes as follows. You'd create a new HTML5 application project, at which point you can choose a variety of frameworks and templates (CoffeeScript, Angular, etc.), which come out of the box with the HTML5 support (i.e., Project Easel) in NetBeans IDE 7.3. Then, once the project is created, you'd right-click it and go to the Project Properties dialog, where you'd be able to enable Play support. At that stage, i.e., when you've checked the checkbox and clicked OK, all the necessary Play files would be added to your project - the routes file and the application.conf, for example. And then you have a Play application.

    Creating support in this way entails nothing more than creating a module with one Java class, where even the layer.xml file is superfluous. All the code in PlayEnablerPanel.java is as follows:

        import java.awt.BorderLayout;
        import javax.swing.JCheckBox;
        import javax.swing.JComponent;
        import javax.swing.JPanel;
        import org.netbeans.spi.project.ui.support.ProjectCustomizer;
        import org.netbeans.spi.project.ui.support.ProjectCustomizer.Category;
        import org.openide.util.Lookup;

        public class PlayEnablerPanel implements ProjectCustomizer.CompositeCategoryProvider {

            @ProjectCustomizer.CompositeCategoryProvider.Registration(
                    projectType = "org.netbeans.modules.web.clientproject",
                    position = 1000)
            public static PlayEnablerPanel enablePlay() {
                return new PlayEnablerPanel();
            }

            @Override
            public Category createCategory(Lookup lkp) {
                return ProjectCustomizer.Category.create("Play Framework", "Configure Play", null);
            }

            @Override
            public JComponent createComponent(Category ctgr, Lookup lkp) {
                JPanel playPanel = new JPanel(new BorderLayout());
                playPanel.add(new JCheckBox("Enable Play"), BorderLayout.NORTH);
                return playPanel;
            }
        }

    Looking forward to having a beer with Peter soon (he lives not far away, in Rotterdam) to discuss this! Also read Part 1 of this series, which I wrote some time ago, and which has other ideas and considerations.

  • Oracle UK Technology "Tech Fest"

    - by rituchhibber
    ** As a priority partner, we are sending you advance notice of these exclusive "Technology Test Fest" free examination sessions. Please note that this communication will be sent out to the wider community one week from today, so please register immediately to secure your place! **

    We are delighted to offer you the exclusive opportunity to register for and attend the Oracle UK "Technology Test Fest", being held as part of the UKOUG Conference at the ICC in Birmingham, in the Drawing Room at the Hyatt Regency hotel adjacent to the ICC venue, from 3rd to 5th December 2012. This is your opportunity to sit your chosen Oracle Technology Specialist Implementation Exam free of charge. Sessions are being run twice daily (at 10:00 and 14:00), with just 15 places at each session - so register now to avoid disappointment! (Exams take about 1.5 hours to complete.)

    Which Implementation Specialist Exams are available to take? Click here to see the list of exams available for you to sit for free at the Oracle UKOUG "Technology Test Fest". The links also include the study guide for each exam. Please review the Specialization Guide as well.

    How do I register for the Oracle UK "Technology Test Fest"?
    1. Fill out the Pearson VUE profile here and complete it with your OPN Company ID. NB: Instructions on how to create/update the profile can be found here.
    2. Register for one of the 4 sessions using the registration links at the top of this page:
       - 3rd December, 2012 at 14:00
       - 4th December, 2012 at 10:00 (morning session)
       - 4th December, 2012 at 14:00 (afternoon session)
       - 5th December, 2012 at 10:00 (morning session)

    VENUE DETAILS:
    The Drawing Room
    Hyatt Regency Hotel Birmingham
    2 Bridge Street
    Birmingham, B1 2JZ
    3 - 5 December 2012

    You will need to bring your own laptop with a Windows OS, and a form of identification, to be able to take any of the exams.

    Need help or advice? For more information about the tests and the Get Specialized programme, please contact Ishacq Nada. For issues with your profile or any other OPN-related problems, please contact the Oracle Partner Business Centre or call 08705 194 194.

    We look forward to welcoming you to the Oracle UK "Technology Test Fest" on the 3rd-5th December 2012! Book early to avoid disappointment.

  • What do you need to know to be a world-class master software developer? [closed]

    - by glitch
    I wanted to bring up this question to you folks, see what you think, and hopefully have you advise me on the matter: let's say you had 30 years of learning and practicing software development in front of you. How would you dedicate your time so that you'd get the biggest bang for your buck? What would you both learn and work on to be a world-class software developer who would make a large impact on the industry and leave behind a legacy? I think that most great developers end up being both broad generalists and specialists in one or two areas of interest. I'm thinking Bill Joy, John Carmack, Linus Torvalds, K&R and so on. I'm thinking that perhaps one approach would be to break things down by categories and establish a base minimum of "software development" greatness. I'm thinking:

    - Operating Systems: completely internalize the core concepts of OS design, perhaps gaining a lot of familiarity with an OSS one such as Linux. Anything from memory management to device drivers has to be complete second nature.
    - Programming Languages: this is one of those topics that IMHO has to be fully grokked, even if it might take many years. I don't think there's quite anything like going through the process of developing your own compiler, understanding language design trade-offs and so on. Programming Language Pragmatics is one of my favorite books, actually; I think you want to have that internalized back to back, and that's just the start. You could go significantly deeper, but I think it's time well spent, because it's such a crucial building block. As a subset of that, you want to really understand the different programming paradigms out there: imperative, declarative, logic, functional and so on. Anything from assembly to LISP should be at the very least comfortable to write in.
    - Contexts: I believe one should have experience working in different contexts to truly be able to appreciate the trade-offs that are being made every day: embedded, web development, mobile development, UX development, distributed, cloud computing and so on.
    - Hardware: I'm somewhat conflicted about this one. I think you want some understanding of computer architecture at a low level, but I feel like the concepts that will truly matter are slightly higher level, such as CPU caching / memory hierarchy, ILP, and so on.
    - Networking: we live in a completely network-dependent era. Having a good understanding of the OSI model, knowing how the Web works, how HTTP works and so on is pretty much a prerequisite these days.
    - Distributed systems: once again, everything's distributed these days; it's getting progressively harder to ignore this reality. Slightly related: perhaps add a solid understanding of how browsers work, since the world seems to be moving so much toward interfacing with everything through a browser.
    - Tools: have a really broad toolset that you're familiar with, one that continuously expands throughout the years.
    - Communication: I think being a great writer, an effective communicator and a phenomenal team player is pretty much a prerequisite for a lot of a software developer's greatness. It can't be overstated.
    - Software engineering: understanding the process of building software, team dynamics, the requirements of the business side, all the pitfalls. You want to deeply understand where what you're writing fits from the market perspective. The better you understand all of this, the more of your work will actually see the light of day.

    This is really just a starting list; I'm confident that there's a ton of other material that you need to master. As I mentioned, you most likely end up specializing in a bunch of these areas as you go along, but I was trying to come up with a baseline. Any thoughts, suggestions and words of wisdom from the grizzled veterans out there who would like to share their thoughts and experiences with this? I'd really love to know what you think!

  • Developing web sites that imitate desktop apps. How to fight that paradigm? [closed]

    - by user1598390
    Suppose there's a company where web sites/apps are designed to resemble desktop apps. They struggle to add:

    - Splash screens
    - Drop-down menus
    - Tab pages
    - Pages that don't grow downward with content; content sits inside a scrollable area, so the page is of a fixed size, as if resembling the one-screen limitation of desktop apps
    - Modal windows, pop-ups, etc.
    - Tree views
    - Absolutely no access to content unless you log in first, even with non-sensitive content; after the splash screen disappears, you are presented with a login screen
    - No links - just simulated buttons
    - Fixed page size; you cannot open a link in another tab
    - A print button that prints directly (not showing a printable page, so the user can't print via the browser's print command)
    - Progress bars for loading content, even when the browser indicates it with its own animation
    - Fonts and colors that emulate a desktop app made with Visual Basic, PowerBuilder, etc. Every app seems almost as if it were made in Visual Basic.

    They reject these elements:

    - Breadcrumbs
    - Good old underlined links
    - Generated/dynamic navigation, usage-based suggestions
    - The ability to open links in multiple tabs
    - Pagination
    - Printable pages
    - The ability to produce a URL you can save or share that links to an item, like when you send someone the link to a specific StackExchange question (the only URL is the main one)
    - The back button

    To achieve this, tons of JavaScript code is needed: lots and lots of JavaScript and Ajax code for things related not to the business but to the necessity of hiding/showing that button, refreshing this listbox, greying out that label, etc. The complexity generated by forcing one paradigm into another means most lines of code are dedicated to maintaining the illusion of a desktop app. What is the best way to change this mindset, and make them embrace the web and start producing modern web apps instead of desktop imitations?

    EDIT: These sites are intranet sites. Users hate these apps. They constantly whine about them, but they have to use them to do their daily work. These sites are in-house solutions; the end-users have no choice but to use them. They are a "captive audience". Also, substitution will not happen because of high costs. But at least if that mindset is changed, new developments would be more web-like.

  • Office lights on or off in programming department? How to decide? [closed]

    - by smp7d
    At my company, the programmers who sit in the same area are constantly fighting over whether the lights stay on or off. Because there is no official policy, it makes for a particularly sticky situation. We are a typical cube farm, with the typical cube-farm fluorescent lights and smaller lamps at our desks. With the lights off it is difficult to read, and you would probably need to turn on your desk light (which some people do anyway). All programmers in our department do most of their reading on their monitors because of the nature of our business. Some feel that we should hold a vote to decide whether the lights stay on or off. A couple who prefer 'lights on' feel that the vote would need to be unanimous to turn them off, since having them on is the more natural office setting. Those who want them off point out that all the other departments keep their lights off. I have heard all of the arguments:

    - Fluorescent lights cause eye strain
    - Reading in the dark causes eye strain
    - Desk lights can be used if light is needed
    - People from other departments feel uncomfortable approaching us in the "dark"
    - The monitors are harder to see in the light
    - ...

    Right now, some of the developers turn the lights off and some turn them on; it really just depends who last walked by the switch. I am a bit sick of the controversy, as it feels a bit childish at this point. I'm tired of hearing about it and I'm tired of having to talk about it. I tried to help them decide, but as I explained, voting wasn't enough. Do other programming departments have this same argument? What is the standard or traditionally accepted option in a programming area? Are there any good reasons for one way or the other beyond preference? How can we decide fairly? EDIT: Just a little more info... We do not have clients or visitors come into our office. We do have windows and hall lights that make our environment perfectly bearable with the lights off; it somewhat resembles a meeting room with the lights off during a PowerPoint presentation.

    Read the article

  • SMTP, POP3 & PST. Acronyms from Hades.

    - by mikef
    A busy SysAdmin will occasionally have reason to curse SMTP. It is, certainly, one of the strangest events in the history of IT that such a deeply flawed system, designed originally purely for campus use, should have reached its current dominant position. The explanation is that it was the first open-standard email system, so SMTP/POP3 became the internet standard. We are, in consequence, dogged by a system with security weaknesses so extreme that messages are sent in plain text and you have no real assurance as to who a message came from anyway (SMTP-AUTH hasn't really caught on). Even without the security issues, the use of SMTP in an office environment is a management nightmare for all commercial users responsible for complying with the regulations that govern the conduct of business, such as tracking, retaining, and recording company documents.

    SMTP mail developed from various Unix-based systems designed for campus use that took the mail analogy so literally that mail messages were actually delivered to the users, using a 'store and forward' mechanism. This meant that, from the start, the end user had to store, manage and delete messages. This problem has passed through every release of MS Outlook: it has to be able to manage mail locally in the dreaded PST file. As a stand-alone system, Outlook is flawed by its neglect of any means of automatic backup. Earlier Outlook PST files would actually blow up without warning when they reached the 2 GB limit, becoming corrupted and inaccessible, which led to a thriving industry of third-party tools to clear up the mess.

    Microsoft Exchange is, of course, a server-based system, and emails are less likely to be lost in such a system if it is properly run. However, there is nothing to stop users from using local PSTs as well, and there is the additional temptation to load emails onto mobile devices or USB keys for offline working. The result is that the System Administrator is faced with a complex hybrid system where backups have to be taken from servers and from PCs scattered around the network, where duplication of emails causes storage issues, and where document retention policies become impossible to manage. Add to that the complexity of mobile phone email readers and mail synchronization, and the problem is daunting. It is hardly surprising that the mood darkens when SysAdmins meet and discuss PST Hell.

    If you were promoted to the task of tormenting the souls of the damned in Hades, what aspects of the management of Outlook would you find most useful for your task? I'd love to hear from you. Cheers, Michael
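
    The plain-text weakness is easy to demonstrate in code. Below is a minimal, purely illustrative Java sketch (the host name and addresses are hypothetical) that speaks raw SMTP over a plain socket: every command, the claimed sender address, and the message body cross the wire unencrypted, and nothing verifies the MAIL FROM claim on a server that doesn't enforce SMTP-AUTH.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.io.OutputStreamWriter;
        import java.io.Writer;
        import java.net.Socket;

        // Illustrative sketch only: a minimal raw SMTP exchange. Everything,
        // including the message body, travels as clear text on port 25.
        public class SmtpPlainTextDemo {
            private static String send(Writer out, BufferedReader in, String cmd)
                    throws IOException {
                out.write(cmd + "\r\n");   // SMTP lines are CRLF-terminated
                out.flush();
                return in.readLine();      // read the server's status reply
            }

            public static void main(String[] args) throws IOException {
                try (Socket s = new Socket("mail.example.com", 25);  // hypothetical host
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     Writer out = new OutputStreamWriter(s.getOutputStream())) {
                    System.out.println(in.readLine());  // 220 greeting
                    System.out.println(send(out, in, "HELO client.example.com"));
                    // The sender address is simply asserted, not verified:
                    System.out.println(send(out, in, "MAIL FROM:<claimed-sender@example.com>"));
                    System.out.println(send(out, in, "RCPT TO:<someone@example.com>"));
                    System.out.println(send(out, in, "DATA"));  // 354: start mail input
                    // Message body, terminated by a line containing a single dot:
                    System.out.println(send(out, in,
                            "Subject: readable by anyone on the path\r\n\r\nHello.\r\n."));
                    System.out.println(send(out, in, "QUIT"));
                }
            }
        }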

    Read the article

  • Is the Leptonica implementation of 'Modified Median Cut' not using the median at all?

    - by TheCodeJunkie
    I'm playing around a bit with image processing and decided to read up on how color quantization works, and after a bit of reading I found the Modified Median Cut Quantization algorithm. I've been reading the code of the C implementation in the Leptonica library and came across something I thought was a bit odd. Now, I want to stress that I am far from an expert in this area, nor am I a math-head, so I predict that this all comes down to me not understanding all of it, and not that the implementation of the algorithm is wrong. The algorithm states that the vbox should be split along the largest axis, and that the split should use the following logic:

    "The largest axis is divided by locating the bin with the median pixel (by population), selecting the longer side, and dividing in the center of that side. We could have simply put the bin with the median pixel in the shorter side, but in the early stages of subdivision, this tends to put low density clusters (that are not considered in the subdivision) in the same vbox as part of a high density cluster that will outvote it in median vbox color, even with future median-based subdivisions. The algorithm used here is particularly important in early subdivisions, and is useful for giving visible but low population color clusters their own vbox. This has little effect on the subdivision of high density clusters, which ultimately will have roughly equal population in their vboxes."

    For the sake of the argument, let's assume that we have a vbox that we are in the process of splitting, and that the red axis is the largest. In the Leptonica algorithm, on line 01297, the code appears to do the following:

    - Iterate over all the possible green and blue variations of the red color
    - For each iteration, add to the total number of pixels (population) found along the red axis
    - For each red value, sum up the population of the current red and the previous ones, thus storing an accumulated value for each red

    (Note: when I say 'red' I mean each point along the axis that is covered by the iteration; the actual color may not be red, but it contains a certain amount of red.)

    So for the sake of illustration, assume we have 9 "bins" along the red axis with the following populations:

    4 8 20 16 1 9 12 8 8

    After iterating over all the red bins, the partialsum array will contain the following counts for the bins above:

    4 12 32 48 49 58 70 78 86

    and total will have a value of 86. Once that's done, it's time to perform the actual median cut; for the red axis this is performed on line 01346. It iterates over the bins and checks their accumulated sums, and here's the part that throws me off from the description of the algorithm: it looks for the first bin that has a value greater than total/2. Wouldn't total/2 mean that it is looking for a bin whose value is greater than the average value, and not the median? The median for the above bins would be 49. The use of 43 versus 49 could potentially have a huge impact on how the boxes are split, even though the algorithm then proceeds by moving to the center of the larger side of where the matched value was. Another thing that puzzles me a bit is that the paper specifies that the bin with the median value should be located, but it does not mention how to proceed if there is an even number of bins; the median would be the result of (a+b)/2, and it's not guaranteed that any of the bins contains that population count.
    So this is what makes me think that there are some approximations going on that are negligible because of how the split actually takes place at the center of the larger side of the selected bin. Sorry if this got a bit long-winded, but I wanted to be as thorough as I could, because it's been driving me nuts for a couple of days now ;)
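
    To make the arithmetic concrete, here is a minimal Java sketch (not Leptonica's actual code; the bin populations are the hypothetical ones from above). It builds the accumulated sums and then searches for the first bin whose running total exceeds total/2. One possible reading of the paper is that this locates the bin containing the median pixel by population, i.e. the 43rd of 86 pixels counted in axis order, rather than comparing bin counts against their average; that interpretation is offered here only as a way to pin down the step in question.

        // Minimal sketch, not Leptonica's code: locate the bin containing the
        // median pixel (by population) along one axis of the vbox.
        public class MedianBinSketch {
            public static void main(String[] args) {
                // Hypothetical bin populations along the red axis (from above)
                int[] bins = {4, 8, 20, 16, 1, 9, 12, 8, 8};

                // Build the running sums, like the partialsum array
                int[] partialSum = new int[bins.length];
                int total = 0;
                for (int i = 0; i < bins.length; i++) {
                    total += bins[i];
                    partialSum[i] = total;
                }
                // total == 86, partialSum == {4, 12, 32, 48, 49, 58, 70, 78, 86}

                // The median pixel is the (total/2)-th pixel counted in axis
                // order, so the first bin whose running total exceeds total/2
                // is the bin that contains it.
                for (int i = 0; i < partialSum.length; i++) {
                    if (partialSum[i] > total / 2) {
                        System.out.println("median bin: " + i);  // prints 3 (48 > 43)
                        break;
                    }
                }
            }
        }

    Under that reading, total/2 is the midpoint of the pixel count rather than the mean of the bin counts, which would be consistent with the comparison in the C source.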

    Read the article

  • WebLogic Partner Community Newsletter October 2012

    - by JuergenKress
    Dear WebLogic partner community member, Oracle OpenWorld and JavaOne are just over, with lots of product updates and highlights. In this newsletter you will find the key information on many new products and launches. Make sure you download the presentations from our WebLogic Community Workspace (WebLogic Community membership required) to train yourself and for your next customer meeting. Thanks for all the tweets #WebLogicCommunity, the pictures on our Facebook page and the nice blog posts from Guido, Lucas & Jan. JavaOne was a super success: JavaOne 2012: Strategy and Technical Keynote - Java 2.5 years after the acquisition - IDC report - make the future Java! If you want to become a Java expert, make sure you attend one of our WebLogic 12c Bootcamps or our first ExaLogic Hackers Night - November 19th, Nürnberg, Germany. All developers can use WebLogic free of charge! For developers, there is lots of ADF news: Oracle ADF Essentials & ADF training material now on the iPad by Grant Ronald & GlassFish Extension for Oracle JDeveloper & Installing, Configuring, and Testing WebLogic Server 12c Developer Zip Distribution in NetBeans. If you want to become a certified WebLogic company, WebLogic Server 12c Specialization is now available for you: just go to the Knowledge Zone section, select the “Specialization” tab and click on “Apply Now”. Now available: WebLogic Server 12c Implementation Specialist Boot Camp LVT. Now in production: Oracle WebLogic Server 12c Implementation Specialist certification (1Z0-599). In our specialization benefit series we highlight this month the opportunity to promote your WebLogic services via Google ads. Torsten Winterberg, OFM ACE Director, published Mobile Web Applications – A guide for professional development. Please feel free to let us know if you publish a book or article! Hope to see you at the Middleware Day at the UK Oracle User Group Conference 2012 in Birmingham. Jürgen Kress, Oracle WebLogic Partner Adoption EMEA. To read the newsletter please visit http://tinyurl.com/WebLogicnewsOctober2012 (OPN account required). To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic Community newsletter,newsletter,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Spotlight on an office - Nairobi, Kenya

    - by Maria Sandu
    Hi everyone, my name is Joash Mitei. I am a graduate intern at Oracle Systems Kenya and I will briefly take you through our offices and the working environment here in Nairobi, Kenya. I’ve been with Oracle since February 2012 and I’m responsible for Applications Pre-sales, focusing on Oracle EPM and E-Business Suite. My background is finance and accounting, so joining Oracle was almost a totally different ball game, but the transition has been smooth. The Oracle offices here are located on the second floor of Mebank Towers. We moved to the 2nd floor just three months ago, from the 5th floor, mainly because of the growing workforce. We cover the whole Eastern Africa region, hence diversity in culture is evident. This is a plus, since you get to interact with people of very different backgrounds, cultures and ways of thinking. The building itself is on the outskirts of the CBD, hence free from the hustle and bustle of the town. The office is split into different sections: there is a main working area with an open-desk design that fosters interaction between colleagues, there are 4 conference rooms for meetings and presentations, 3 quiet rooms for a little privacy when needed, and a dining area for meals and ‘hanging out’. The working environment is world-class, to say the least. The employees are very professional, quite smart and, needless to say, very busy. There are 4 interns covering sales and pre-sales in both Tech and Apps. As an intern you get support from your supervisor, but you are required to show initiative yourself, and thus need to be very proactive and inquisitive. The local management is well structured and communicative, which ensures effectiveness and efficiency in the office. Apart from the daily work, we usually have events to boost staff morale, such as ‘TGIF hang-outs’, football matches against each other or against other companies, and team-building retreats. All these are monumental in fostering the RED POTENTIAL. We also do numerous CSR activities in the local communities. Well, that’s the Kenyan office for you. Glad to be your tour guide. Have a superb day!

    Read the article

  • iPhone app development pricing [closed]

    - by AlexMorley-Finch
    I currently manage a website design and development company that integrates with print work too. I've been co-manager for a couple of months now, but business is slow at the moment. I got an email today from a potential client asking if we do iPhone app development. Obviously we do not. But seeing as there aren't many other projects on the go at the moment, why not give it a go? Searching Google was my first option, and I did a bit of background scrubbing; I discovered that I need certain developer tools, etc. This can be arranged easily enough. The reason I came to Stack Exchange is for answers to these questions. I am a competent programmer and have had experience in Java, PHP, JavaScript, C++ and C#. So how long would it take me to come to grips with Objective-C, assuming an 8-hour working day? Not only the language, but also the iPhone interface libraries (if any exist) and other native libraries? The potential client said the app will be a catalogue, so I'm assuming the data will either be pulled from online or from some kind of local database. I know the requirements are vague, but does anyone have any idea of how much time this would take? To learn the language, design the architecture and write the code? 40 hours? 80 hours? My final question kind of depends on the first two. Obviously the potential client wants a quote, and I have no idea how much an app costs. I did some research and found a huge range of prices. The majority of the cost would come from the time needed to develop the app! So to summarise: How long would it take me to be comfortable coding in Objective-C, taking into consideration my past knowledge? I'm assuming a day or two to fully understand, but you really don't know. How long would it take me to develop an app? How much should I charge (approx.) for the development of this app?

    Read the article

< Previous Page | 548 549 550 551 552 553 554 555 556 557 558 559  | Next Page >