Search Results

Search found 17940 results on 718 pages for 'algorithm design'.


  • New Wine in New Bottles

    - by Tony Davis
    How many people, when their car shows signs of wear and tear, would consider upgrading the engine and keeping the shell? Even if you're cash-strapped, you'll soon work out the subtlety of the economics, the cost of sudden breakdowns, the precious time lost coping with the hassle, and the low 'book value'. You'll generally buy a new car. The same philosophy should apply to database systems.

    Mainstream support for SQL Server 2005 ends on April 12; many DBAs, if they haven't done so already, will be considering the migration to SQL Server 2008 R2. Hopefully, that upgrade plan will include a fresh install of the operating system on brand new hardware. SQL Server 2008 R2 and Windows Server 2008 R2 are designed to work together. The improved architecture, processing power, and hyper-threading capabilities of modern processors will dramatically improve the performance of many SQL Server workloads, and allow consolidation opportunities.

    Of course, there will be many DBAs smiling ruefully at the suggestion of such indulgence. This is nothing like the real world, this halcyon place where hardware and software budgets are limitless, development and testing resources are plentiful, and third party vendors immediately certify their applications for the latest-and-greatest platform! As with cars, or any other technology, the justification for a complete upgrade is complex. With servers, the extra cost at time of upgrade will generally pay you back in terms of the increased performance of your business applications, reduced maintenance costs, training costs and downtime. Also, if you plan and design carefully, it's possible to offset hardware costs with reduced SQL Server licence costs. In his forthcoming SQL Server Hardware book, Glenn Berry describes a recent case where he was able to replace 4 single-socket database servers with one two-socket server, saving about $90K in hardware costs and $350K in SQL Server licence costs.

    Of course, there are exceptions. If you do have a stable, reliable, secure SQL Server 6.5 system that still admirably meets the needs of a specific business requirement, and has no security vulnerabilities, then by all means leave it alone. Why upgrade just for the sake of it? However, as soon as a system shows signs of being unfit for purpose, or is moving out of mainstream support, the ruthless DBA will make the strongest possible case for a belts-and-braces upgrade.

    We'd love to hear what you think. What does your typical upgrade path look like? What are the major obstacles? Cheers, Tony.

    Read the article

  • links for 2011-02-18

    - by Bob Rhubart
    - VirtualBox: Pre-Built Developer VMs "Learning your way around a new software stack is challenging enough without having to spend multiple cycles on the install process. Instead, we have packaged such stacks into pre-built Oracle VM VirtualBox appliances that you can download, install, and experience as a single unit." (tags: oracle virtualization virtualbox)
    - Java Space on Parleys (The Java Source) "Oracle partnered with Stephan Janssen, founder of Parleys to make this happen. Parleys website offers a user friendly experience to view online content. You can download some of the talks to your desktop or watch them on the go on mobile devices." (tags: oracle java parleys)
    - Why ADF Developers Should Attend ODTUG This Year (Shay Shmeltzer's Weblog) Shay says: "A new track called the "Fusion Middleware" track has been formed and it has lots of sessions for any level of ADF developer. The track is run by several Oracle ACEs who are also involved in the ADF Enterprise Methodology Group." (tags: oracle otn odtug fusionmiddleware)
    - Wrapping up an Exciting Mobile World Congress (The Java Source) "One of the more popular topics in our booth was the use of Java in the Smart Grid. In our booth we were showing off some of the work of the Hydra Consortium whose goal it is to leverage the emerging smart grid infrastructure to securely enable the delivery of personal health data..." (tags: oracle java smartgrid)
    - How to Audit and Monitor BI Publisher Reports Access? (Oracle BI Publisher Blog) "Do you know who is accessing to which report at what time at your reporting environment ? As you delivered the BI Publisher reports to the production environment and your users start using them as part of their daily business operations you might wonder such questions." (tags: oracle otn businessintelligence)
    - Oracle VM VirtualBox 4.0.4 Released! (Oracle's Virtualization Blog) Fat Bloke says: "Oracle made a maintenance update release of Oracle VM VirtualBox version 4.0.4 today. You can Download it now, or read about the changes in the ChangeLog." (tags: oracle otn virtualization virtualbox)
    - Obama says Cloud and Data Center Consolidation Will Help Curb IT Costs | WHIR Web Hosting Industry News "In the report, he estimated that the federal government could reallocate some $20 billion of IT spending to cloud computing technologies and reduce 'data center infrastructure expenditure by approximately 30 percent' through cloud computing." (tags: cloud obama datacenter)
    - Chris Muir: ADF BC: Creating an "EXISTS" View Criteria Oracle ACE Director Chris Muir shares some ADF tips. (tags: oracle otn oracleace adf)
    - Translation and Multiple Languages with Oracle UCM | Bex Huff Bex says: "Last year, I gave a presentation at Oracle Open World about Creating and Maintaining an Internationalized Web Site. Well, I'm happy to announce that one of the several add-ons to UCM is now available for purchase!" (tags: oracle otn enterprise2.0 ecm oracleace)
    - ORACLENERD: Design Documentation Oracle ACE Chet "ORACLENERD" Justice makes a pledge. (tags: oracle otn oracleace database)

    Read the article

  • SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints

    - by pinaldave
    I earlier wrote about the SQL BI Quiz over here and here. The details of the quiz are here: Working with huge data is very common in Data Warehousing. It is necessary to create Cubes on the data to make it meaningful and consumable. There are cases when retrieving the data from the cube takes a lot of time. Let us assume that your cube has been returning data very quickly, and then one day it suddenly starts returning the data very slowly. What are the three things you will do to diagnose this? After diagnosing it, what will you do to resolve the performance issue? Participate in my question over here.

    I asked BI expert Jason Thomas to help with a few hints for blog readers. He is one of the leading SSAS experts and writes about a complicated subject in simple words. If queries were executing properly before but now take a long time to return the data, it means that there has been a change in the environment in which they run. Some possible changes are listed below:

    1) Data factors: Compare the data size then and now. An increase in data can result in different execution times. Poorly written queries as well as poor design will not start showing issues till the data grows. How to find it out? (Ans: SQL Server Profiler and Perfmon counters can be used for identifying the issues and performance tuning the MDX queries)

    2) Internal factors: Is some slow MDX query, or are multiple MDX queries, running at the same time, which was not the case when you had tested it before? Is there any locking happening due to proactive caching or processing operations? Are the measure group caches being cleared by processing operations? (Ans: Again, Profiler and Perfmon counters will help in finding it out. Load testing can be done using AS Performance Workbench (http://asperfwb.codeplex.com/) by running multiple queries at once)

    3) External factors: Is some other application competing for the same resources?

    HINT: Read "Identifying and Resolving MDX Query Performance Bottlenecks in SQL Server 2005 Analysis Services" (http://sqlcat.com/whitepapers/archive/2007/12/16/identifying-and-resolving-mdx-query-performance-bottlenecks-in-sql-server-2005-analysis-services.aspx)

    Well, these are great tips. Now win big prizes by participating in my question over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SSIS and StreamInsight Working Together.

    I have been thinking a lot recently about what it would be like to have StreamInsight and SSIS working together. Well, the CAT team have produced a paper on some of our options here. Here are some of my thoughts.

    There is of course a slight mismatch in their types of usage. StreamInsight is an event stream processing engine capable of operating on new data in the sub-second timeframe. The engine allows you to do real-time analytics and take decisions on events that have potentially only just happened. SSIS, on the other hand, is a batch processing engine. In general I do not like having to invoke the same package more than once every 90 seconds or so, as it can start to get expensive. Usually when doing batch processing we have an hour or longer of grace before we have to move data from A –> B.

    StreamInsight operates on streams of data. Before anyone mentions it, yes, I know StreamInsight is equally adept at using the IEnumerable interface, but I would argue live streaming and real-time analytics is a primary goal of the product. SSIS does not have an “Always On” button.

    I do not like the idea of embedding StreamInsight inside SSIS using a transform, particularly. It means StreamInsight becomes a batch processing engine, because it can only operate when the SSIS package is running and SSIS is in charge of when that happens. If I am to have StreamInsight within SSIS then I prefer to have StreamInsight on the adapters. This way you can force the adapters to stay open and introduce events into your Pipeline. SSIS has a much richer set of transforms out of the box than StreamInsight. Although “Always On” was not a design goal of SSIS, I have used it like this and it works just fine.

    SSIS being called from within StreamInsight, now that excites me. See below.

    For a while now I have been thinking what it would be like to decouple the Data Flow task from the SSIS package and expose it as something with which you can interact. Anything can instantiate this version of a DFT, as it would expose one or more input interfaces and one or more output interfaces. I can imagine that this would be a big hit when moving to “The Cloud” as well. I could see the Data Flow task maybe being hosted in Azure AppFabric or some such layer. StreamInsight would be able to take advantage of this as well.

    I am interested to see where this goes and will be pressing for more meat around the subject when I visit Redmond soon.

    Read the article

  • Spotlight on an office - Dublin!

    - by Tim Koekkoek
    In this third instalment of our monthly topic ‘Spotlight on an Office’, we visit Dublin, Ireland Oracle has 5 offices in Dublin all in the EastPoint Business Park close to Dublin City centre. In Dublin there are currently 1,000 people working for Oracle. You’ll find, among others, a large part of OracleDirect, our inside sales organization, part of our EMEA Finance organization and employees from Product and Systems Development who work on the heart of Oracle’s products. Facilities EastPoint Business Park is located next to the Irish Financial Service Centre (IFSC) and is only one train stop away from Dublin city centre. This seafront business park and nearby amenities cater for staff’s needs, which include a Sandwich Bar, a Coffee Shop and a small Convenience Store and Newsagent. Moreover there is a Physical Therapy Clinic and Beauty Salon onsite, Pilates and Boot Camp classes, weekly WeightWatcher Classes, five football / tennis courts and an outdoor chess board. When the sun is shining On sunny days comfy, colourful beanbags are spread throughout the park to relax and every Wednesday there is the Irish Village Market providing staff with a variety of delicious gourmet foods from all over the world. Friday afternoons after work are often used by Oracle employees to start the weekend socializing in The Epicenter Cafe Bar & Venue. In the office In the Oracle offices, you have an open floor design and an open door policy which makes it really easy to walk over to your colleagues or a manager to discuss your projects and keep informed with what is going on. This way you also have a great chance to bond with your colleagues. In two of the Oracle buildings there are subsidized canteens especially for Oracle employees with chefs cooking something special everyday! One of the best things about Oracle in Dublin is that it is really multinational. Currently there are more than 25 languages spoken by Oracle employees. So you will work with colleagues from all around the globe, every day, which makes it a really interesting and exciting experience. Sport & Social There is also a dedicated Sport and Social Club, Oraclub. They organize many sport and social activities. It doesn’t matter which sport is your favourite, Oraclub caters for like-minded individuals and makes sure you can play or watch your favourite sport. Furthermore, Oraclub organizes exhibition matches to get you acquainted with some other sports. Last year the Gaelic Warriors (A Wheelchair Rugby club) held an exhibition match. Oraclub also offer Oracle parties, language courses and offer discounts on many events! So whether you want to go to a Robbie Williams concert, an exhibition of Van Gogh or a match of the Irish Rugby team, Oraclub is there for everyone! There are also plenty of possibilities to get involved in volunteering. Want to know more about the current vacancies in Dublin? Check https://campus.oracle.com for all of our vacancies.

    Read the article

  • Understanding Data Science: Recent Studies

    - by Joe Lamantia
    If you need such a deeper understanding of data science than Drew Conway's popular venn diagram model, or Josh Wills' tongue in cheek characterization, "Data Scientist (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician." two relatively recent studies are worth reading.   'Analyzing the Analyzers,' an O'Reilly e-book by Harlan Harris, Sean Patrick Murphy, and Marck Vaisman, suggests four distinct types of data scientists -- effectively personas, in a design sense -- based on analysis of self-identified skills among practitioners.  The scenario format dramatizes the different personas, making what could be a dry statistical readout of survey data more engaging.  The survey-only nature of the data,  the restriction of scope to just skills, and the suggested models of skill-profiles makes this feel like the sort of exercise that data scientists undertake as an every day task; collecting data, analyzing it using a mix of statistical techniques, and sharing the model that emerges from the data mining exercise.  That's not an indictment, simply an observation about the consistent feel of the effort as a product of data scientists, about data science.  And the paper 'Enterprise Data Analysis and Visualization: An Interview Study' by researchers Sean Kandel, Andreas Paepcke, Joseph Hellerstein, and Jeffery Heer considers data science within the larger context of industrial data analysis, examining analytical workflows, skills, and the challenges common to enterprise analysis efforts, and identifying three archetypes of data scientist.  As an interview-based study, the data the researchers collected is richer, and there's correspondingly greater depth in the synthesis.  The scope of the study included a broader set of roles than data scientist (enterprise analysts) and involved questions of workflow and organizational context for analytical efforts in general.  I'd suggest this is useful as a primer on analytical work and workers in enterprise settings for those who need a baseline understanding; it also offers some genuinely interesting nuggets for those already familiar with discovery work. We've undertaken a considerable amount of research into discovery, analytical work/ers, and data science over the past three years -- part of our programmatic approach to laying a foundation for product strategy and highlighting innovation opportunities -- and both studies complement and confirm much of the direct research into data science that we conducted. There were a few important differences in our findings, which I'll share and discuss in upcoming posts.

    Read the article

  • schedule compliance and keeping technical supports and resolving issues

    - by imays
    I am an entrepreneur running a small software development company. I developed the flagship product myself, and the company has grown to 14 people. One point of pride is that we have never had to take investment or loans. The core development team is 5 people: 3 seniors and 2 juniors. After the first release, we received many issues from our customers. Most of them are bug reports, customization requests, usage questions and upgrade requests. Issues come in from customers many times every day, and each one takes some, or a lot, of our developers' time. Because our product is a software development kit (SDK), most questions can only be answered by our developers, and resolving bug reports obviously requires developers as well. Estimating the time to resolve a bug is hard; I fully understand that. However, our developers insist they cannot set any due date for a project because they are busy every day doing technical support and bug fixes for customer issues. Of course, they never work overtime. I suggested dividing the team into two parts: one focusing on development by milestones, the other doing technical support and bug fixes without due dates. Then we could announce a release plan officially, and after each release the two parts would swap roles for the next milestone. However, they said no, because "it is impossible to share knowledge and design documents fully." They still say they cannot set the release date, and they ask me to keep the due date flexible. They do not fix the due date of each milestone. Fortunately, our company has no loans or investors, so we are not being choked. But I think it is a bad idea to keep this situation. I know the story of the ant and the grasshopper. Our customers are tired of waiting forever for our release dates. Companies have limited time and money. If a flexible due date without limit were acceptable, would they also accept a flexible salary day? What is the root cause of our problem? All I want is to set and precisely achieve the due date of each milestone without losing frequent technical support. I think there must be a solution for this situation. Please answer me. Thanks in advance. PS. Our project management tools and methods are Trello, a Mantis-like issue tracker, shared calendar software and scrum (cards collected into a series of 'small and high completeness' projects).

    Read the article

  • Why would Copying a Large Image to the Clipboard Freeze a Computer?

    - by Akemi Iwaya
    Sometimes, something really odd happens when using our computers that makes no sense at all…such as copying a simple image to the clipboard and the computer freezing up because of it. An image is an image, right? Today’s SuperUser post has the answer to a puzzled reader’s dilemma. Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Original image courtesy of Wikimedia.

    The Question

    SuperUser reader Joban Dhillon wants to know why copying an image to the clipboard on his computer freezes it up: I was messing around with some height map images and found this one: (http://upload.wikimedia.org/wikipedia/commons/1/15/Srtm_ramp2.world.21600×10800.jpg) The image is 21,600*10,800 pixels in size. When I right click and select “Copy Image” in my browser (I am using Google Chrome), it slows down my computer until it freezes. After that I must restart. I am curious about why this happens. I presume it is the size of the image, although it is only about 6 MB when saved to my computer. I am also using Windows 8.1.

    Why would a simple image freeze Joban’s computer up after copying it to the clipboard?

    The Answer

    SuperUser contributor Mokubai has the answer for us: “Copy Image” is copying the raw image data, rather than the image file itself, to your clipboard. The raw image data will be 21,600 x 10,800 x 3 (24 bit image) = 699,840,000 bytes of data. That is approximately 700 MB of data your browser is trying to copy to the clipboard. JPEG compresses the raw data using a lossy algorithm and can get pretty good compression. Hence the compressed file is only 6 MB.

    The reason it makes your computer slow is that it is probably filling your memory up with at least the 700 MB of image data that your browser is using to show you the image, another 700 MB (along with whatever overhead the clipboard incurs) to store it on the clipboard, and a not insignificant amount of processing power to convert the image into a format that can be stored on the clipboard. Chances are that if you have less than 4 GB of physical RAM, then those copies of the image data are forcing your computer to page memory out to the swap file in an attempt to fulfil both memory demands at the same time. This will cause programs and disk access to be sluggish as they use the disk and try to use the data that may have just been paged out.

    In short: Do not use the clipboard for huge images unless you have a lot of memory and a bit of time to spare. Like pretty graphs? This is what happens when I load that image in Google Chrome, then copy it to the clipboard on my machine with 12 GB of RAM: It starts off at the lower point using 2.8 GB of RAM, loading the image punches it up to 3.6 GB (approximately the 700 MB), then copying it to the clipboard spikes way up there at 6.3 GB of RAM before settling back down at the 4.5-ish you would expect to see for a program and two copies of a rather large image. That is a whopping 3.7 GB of image data being worked on at the peak, which is probably the initial image, a reserved quantity for the clipboard, and perhaps a couple of conversion buffers. That is enough to bring any machine with less than 8 GB of RAM to its knees.

    Strangely, doing the same thing in Firefox just copies the image file rather than the image data (without the scary memory surge).

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users?
    Check out the full discussion thread here.
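
    The arithmetic in that answer is easy to sanity-check. The snippet below is a rough, hypothetical sketch (it is not from the original discussion) that recomputes the raw bitmap size for a 21,600 x 10,800 image at 24 bits per pixel and contrasts it with the roughly 6 MB JPEG on disk:

        public class ClipboardImageMath {
            public static void main(String[] args) {
                long width = 21_600;      // pixels
                long height = 10_800;     // pixels
                long bytesPerPixel = 3;   // 24-bit RGB, no alpha channel

                long rawBytes = width * height * bytesPerPixel;   // 699,840,000 bytes
                double rawMegabytes = rawBytes / 1_000_000.0;     // ~700 MB uncompressed

                // Browser copy plus clipboard copy, ignoring conversion buffers and overhead.
                double twoCopiesGigabytes = 2 * rawBytes / 1_000_000_000.0;

                System.out.printf("Raw bitmap: %,d bytes (~%.0f MB)%n", rawBytes, rawMegabytes);
                System.out.printf("Two in-memory copies: ~%.1f GB%n", twoCopiesGigabytes);
                System.out.println("Same image as a JPEG on disk: ~6 MB, thanks to lossy compression");
            }
        }

    Running it reproduces the roughly 700 MB figure from the answer and shows why two live copies alone approach 1.4 GB before any conversion buffers are counted.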

    Read the article

  • Problems implementing a screen space shadow ray tracing shader

    - by Grieverheart
    Here I previously asked for the possibility of ray tracing shadows in screen space in a deferred shader. Several problems were pointed out. One of the most important problems is that only visible objects can cast shadows and objects between the camera and the shadow caster can interfere. Still I thought it'd be a fun experiment.

    The idea is to calculate the view coordinates of pixels and cast a ray to the light. The ray is then traced pixel by pixel to the light and its depth is compared with the depth at the pixel. If a pixel is in front of the ray, a shadow is cast at the original pixel.

    At first I thought that I could use the DDA algorithm in 2D to calculate the distance 't' (in p = o + t d, where o is the origin and d the direction) to the next pixel and use it in the 3D ray equation to find the ray's z coordinate at that pixel's position. For the 2D ray, I would use the projected and biased 3D ray direction and origin. The idea was that 't' would be the same in both the 2D and 3D equations. Unfortunately, this is not the case since the projection matrix is 4D. Thus, some tweak needs to be done to make this work this way.

    I would like to ask if someone knows of a way to do what I described above, i.e. from a 2D ray in texture coordinate space to get the 3D ray in screen space. I did implement a simple version of the idea which you can see in the following video: video here. Shadows may seem a bit pixelated, but that's mostly because of the size of the step in 't' I chose.

    And here is the shader:

        #version 330 core

        uniform sampler2D DepthMap;
        uniform vec2 projAB;
        uniform mat4 projectionMatrix;

        const vec3 light_p = vec3(-30.0, 30.0, -10.0);

        noperspective in vec2 pass_TexCoord;
        smooth in vec3 viewRay;

        layout(location = 0) out float out_AO;

        vec3 CalcPosition(void){
            float depth = texture(DepthMap, pass_TexCoord).r;
            float linearDepth = projAB.y / (depth - projAB.x);
            vec3 ray = normalize(viewRay);
            ray = ray / ray.z;
            return linearDepth * ray;
        }

        void main(void){
            vec3 origin = CalcPosition();
            if(origin.z < -60) discard;

            vec2 pixOrigin = pass_TexCoord; //tex coords
            vec3 dir = normalize(light_p - origin);
            vec2 texel_size = vec2(1.0 / 600.0);

            float t = 0.1;
            ivec2 pixIndex = ivec2(pixOrigin / texel_size);
            out_AO = 1.0;

            while(true){
                vec3 ray = origin + t * dir;
                vec4 temp = projectionMatrix * vec4(ray, 1.0);
                vec2 texCoord = (temp.xy / temp.w) * 0.5 + 0.5;
                ivec2 newIndex = ivec2(texCoord / texel_size);

                if(newIndex != pixIndex){
                    float depth = texture(DepthMap, texCoord).r;
                    float linearDepth = projAB.y / (depth - projAB.x);
                    if(linearDepth > ray.z + 0.1){
                        out_AO = 0.2;
                        break;
                    }
                    pixIndex = newIndex;
                }

                t += 0.5;
                if(texCoord.x < 0 || texCoord.x > 1.0 || texCoord.y < 0 || texCoord.y > 1.0) break;
            }
        }

    As you can see, here I just increment 't' by some arbitrary factor, calculate the 3D ray and project it to get the pixel coordinates, which is not really optimal. Hopefully, I would like to optimize the code as much as possible and compare it with shadow mapping and how it scales with the number of lights.

    PS: Keep in mind that I reconstruct position from depth by interpolating rays through a full screen quad.

    Read the article

  • Process Rules!

    - by Ajay Khanna
    One of the key components of a process is the “Business Rule”. Business rules take many forms inside your process definition and in a way are a manifestation of your company’s business policy. Business rules inside the process are used for policy enforcement, governance, decision management, operations efficiency, etc. Following are some basic types of rules that can be a part of your process.

    1. Process conditions: These are defined as the process gateways that determine the path a process will take depending on the process parameters. For example: if discount > 10%, go to the approval path; if discount < 10%, auto-approve the order.

    2. Data rules: These business rules are defined as facts in a decision table or knowledge base. The process captures all required parameters and submits those to a RETE-based rules engine. The rules engine processes the data and returns the result. For example, rules determining your insurance eligibility.

    3. Event rules: Here the system is monitoring the various events and event patterns that are emerging inside the process or external to the process. You can define actions or alerts to be triggered when a certain pattern of events emerges over a specified time period. Such types of rules need Complex Event Processing and are used in applications like Credit Card Fraud detection or Utility Demand Response.

    4. User Interface rules: In order to add dynamic behavior to the UI, or to keep users from making mistakes and to enforce policy, another mechanism available is UI rules. They are evaluated as the end user is filling out the web forms. These may include enabling and disabling of UI elements as per business policy. An example could be: if the age of a user is less than 13 years, disable the credit card field and enable the parental approval required checkbox.

    Your process may include many such rule types. Oracle OpenWorld provides a unique opportunity to listen to Oracle Business Process Management experts and customers. We will discuss business rules during various sessions at Oracle OpenWorld. Two of the sessions specifically focused on business rules are listed below:

    Accelerating an Implementation of Complex Worldwide Business Approval Rules: Wednesday, Oct 3, 10:15 AM, Moscone South – 305

    Oracle Business Rules Use Cases Design and Testing: Wednesday, Oct 3, 3:30 PM, Marriott Marquis - Golden Gate C3

    The Oracle Business Process Management track covers a variety of topics and speakers, covering technology, methodology and best practices. You can see the list of Business Process Management sessions here. Come back to this blog for more coverage from Oracle OpenWorld!
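
    As a concrete illustration of rule types 1 and 4 above, here is a minimal, hypothetical sketch in plain Java (it is not Oracle Business Rules or BPM Suite syntax); the OrderFacts class, the 10% discount threshold and the age limit simply restate the examples from the list as code:

        // Hypothetical facts captured by the process; not an Oracle BPM API.
        class OrderFacts {
            final double discountPercent;
            final int customerAge;

            OrderFacts(double discountPercent, int customerAge) {
                this.discountPercent = discountPercent;
                this.customerAge = customerAge;
            }
        }

        public class ProcessRulesSketch {

            // 1. Process condition: gateway routing on the discount parameter.
            static String routeOrder(OrderFacts facts) {
                return facts.discountPercent > 10.0 ? "APPROVAL_PATH" : "AUTO_APPROVE";
            }

            // 4. User Interface rule: parental approval required for users under 13.
            static boolean parentalApprovalRequired(OrderFacts facts) {
                return facts.customerAge < 13;
            }

            public static void main(String[] args) {
                OrderFacts facts = new OrderFacts(12.5, 11);
                System.out.println("Route: " + routeOrder(facts));
                System.out.println("Parental approval required: " + parentalApprovalRequired(facts));
            }
        }

    In a real BPM deployment the same decisions would live in the rules engine or gateway definitions rather than in application code, which is exactly what keeps the policy easy to change without redeploying the process.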

    Read the article

  • Stagnating in programming

    - by Coder
    Time after time this question came up in my mind, but up until today I wasn't thinking about it much. I have been programming for maybe around 8 years now, and for the last two years it seems I'm not as keen to pick up new technologies anymore. Maybe that's a burnout or something, but I'd say it's experience and what I like, that's stopping me from running after the latest and greatest. I'm C++ developer, by this I mean, I love close to metal programming. I have no problems tracing problems through assembly, using tools like WinDbg or HexView. When I use constructs, I think about how they are realized underneath, how the bits are set and unset under the hood. I love battling with complex threading problems and doing everything hardcore way, even by hand if the regular solutions seem half baked. But I also love the C++0x stuff, and use it a lot. And all C++ code as long as it's not cumbersome compared to C counterparts, sometimes I also fall back to sort of "Super C" if the C++ way is ugly. And then there are all other developers who seem to be way more forward looking, .Net 4.0 MVC, WPF, all those Microsoft X#s, LINQ languages, XML and XSLT, mobile devices and so on. I have done a considerable amount of .NET, SQL, ASPX programming, but the further I go, the less I want to try those technologies. Is that bad? Almost every day I hear people saying that managed code is the only way forward, WPF is the way to go. I hear that C++ is godawful, and you can't code anything in it that's somewhat stable. But I don't buy it. With the experience I have, and the knowledge of how native code is compiled and executes, I can say I find it extremely rare that C++ code is unstable, or leaks, or causes crashes that takes more than 30 seconds to identify and fix. And to tell the truth, I've seen enough problems with other "cool" languages that I'd say C++ is even more stable and production proof than the safe languages, at least for me. The only thing that scares me in C++ is new frameworks, I don't trust them, and I use them extra sparingly. STL - yes, ATL - very sparingly, everything else... Well, not very keen on it. Most huge problems I've ran into, all were related to frameworks, not the language itself. Some overrided operator here, bad hierarchy there, poor class design here, mystical castings there. Other than that, C/C++ (yes, I use them together) still seems a very controlled and stable way to develop applications. Am I stagnating? Should I switch a profession, or force myself in all that marketing hype? Are there more developers who feel the same way?

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works. The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the arc case: PSARC 2010/312 Link-editor guidance The History Of Unfortunate Link-Editor Defaults The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system. Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Extensible Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading, and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: For backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort. 
We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines: Change ld Defaults Since the world would be a better place the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting. New Link-Editor One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of exiting code in the world built around the existing ld command, that reaches back to the 1970's. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues. An Option To Make ld Do The Right Things Automatically This line of reasoning is best summarized by a CR filed in 2005, entitled 6239804 make it easier for ld(1) to do what's best The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction: The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best. The intent is that when a user opts into -z best, that they understand that z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change. 
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed. The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it. I drew two conclusions from the above history: For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together. The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls: -z guidance gives advice, but the decision to take that advice remains with the user who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience. It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject. The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system. Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards. ld Command Line Options The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature: In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement. 
Guidance suggestions always offers specific advice, and not vague generalizations. You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed. As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that let's you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows: % ld -Dargs -z guidance=foo,nodefs debug: debug: Solaris Linkers: 5.11-1.2275 debug: debug: arg[1] option=-D: option-argument: args debug: arg[2] option=-z: option-argument: guidance=foo,nodefs debug: warning: unrecognized -z guidance item: foo The -z fatal-warning option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms: -z fatal-warnings | nofatal-warnings --fatal-warnings | --no-fatal-warnings The -z fatal-warnings and the --fatal-warnings option cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings option cause the link-editor to treat warnings as non-fatal. This is the default behavior. The -z guidance option is defined as follows: -z guidance[=item1,item2,...] Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices. It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows. Specify Required Dependencies Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs. Do Not Specify Non-Required Dependencies Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused. Lazy Loading Dependencies should be identified for lazy loading. 
Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload. Direct Bindings Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct, or -z direct options should any dependency be processed before either of these options, or the -z nodirect option is encountered. This guidance can be disabled with -z guidance=nodirect. Pure Text Segment Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn or -z textoff options are encountered. This guidance can be disabled with -z guidance=notext. Mapfile Syntax All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile. Library Search Path Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath. In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects. Example The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings: Does not specify all it's dependencies Specifies dependencies it does not use Does not use direct bindings Uses a version 1 mapfile Contains relocations to the readonly allocable text (not PIC) This scenario is sadly very common — many shared objects have one or more of these issues. % cat hello.c #include <stdio.h> #include <unistd.h> void hello(void) { printf("hello user %d\n", getpid()); } % cat mapfile.v1 # This version 1 mapfile will trigger a guidance message % cc hello.c -o hello.so -G -M mapfile.v1 -lelf As you can see, the operation completes without error, resulting in a usable object. 
However, turning on guidance reveals a number of things that could be better: % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance ld: guidance: version 2 mapfile syntax recommended: mapfile.v1 ld: guidance: -z lazyload option recommended before first dependency ld: guidance: -B direct or -z direct option recommended before first dependency Undefined first referenced symbol in file getpid hello.o (symbol belongs to implicit dependency /lib/libc.so.1) printf hello.o (symbol belongs to implicit dependency /lib/libc.so.1) ld: warning: symbol referencing errors ld: guidance: -z defs option recommended for shared objects ld: guidance: removal of unused dependency recommended: libelf.so.1 warning: Text relocation remains referenced against symbol offset in file .rodata1 (section) 0xa hello.o getpid 0x4 hello.o printf 0xf hello.o ld: guidance: position independent (PIC) code recommended for shared objects ld: guidance: see ld(1) -z guidance for more information Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things: % cat mapfile.v2 # This version 2 mapfile will not trigger a guidance message $mapfile_version 2 % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance There are situations in which the guidance does not fit the object being built. For instance, you want to build an object without direct bindings: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance ld: guidance: -B direct or -z direct option recommended before first dependency ld: guidance: see ld(1) -z guidance for more information It is easy to disable that specific guidance warning without losing the overall benefit from allowing the remainder of the guidance feature to operate: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect Conclusions The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact. Deciding to use guidance is likely to cause some up front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will replay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.

    Read the article

  • Recent improvements in Console Performance

    - by loren.konkus
    Recently, the WebLogic Server development and support organizations have worked with a number of customers to quantify and improve the performance of the Administration Console in large, distributed configurations where there is significant latency in the communications between the administration server and managed servers. These improvements fall into two categories: Constraining the amount of time that the Console stalls waiting for communication Reducing and streamlining the amount of data required for an update A few releases ago, we added support for a configurable domain-wide mbean "Invocation Timeout" value on the Console's configuration: general, advanced section for a domain. The default value for this setting is 0, which means wait indefinitely and was chosen for compatibility with the behavior of previous releases. This configuration setting applies to all mbean communications between the admin server and managed servers, and is the first line of defense against being blocked by a stalled or completely overloaded managed server. Each site should choose an appropriate timeout value for their environment and network latency. In the next release of WebLogic Server, we've added an additional console preference, "Management Operation Timeout", to the Console's shared preference page. This setting further constrains how long certain console pages will wait for slowly responding servers before returning partial results. While not all Console pages support this yet, key pages such as the Servers Configuration and Control table pages and the Deployments Control pages have been updated to support this. For example, if a user requests a Servers Table page and a Management Operation Timeout occurs, the table is displayed with both local configuration and remote runtime information from the responding managed servers and only local configuration information for servers that did not yet respond. This means that a troublesome managed server does not impede your ability to manage your domain using the Console. To support these changes, these Console pages have been re-written to use the Work Management feature of WebLogic Server to interact with each server or deployment concurrently, which further improves the responsiveness of these pages. The basic algorithm for these pages is: For each configuration mbean (ie, Servers) populate rows with configuration attributes from the fast, local mbean server Find a WorkManager For each server, Create a Work instance to obtain runtime mbean attributes for the server Schedule Work instance in the WorkManager Call WorkManager.waitForAll to wait WorkItems to finish, constrained by Management Operation Timeout For each WorkItem, if the runtime information obtained was not complete, add a message indicating which server has incomplete data Display collected data in table In addition to these changes to constrain how long the console waits for communication, a number of other changes have been made to reduce the amount and scope of managed server interactions for key pages. For example, in previous releases the Deployments Control table looked at the status of a deployment on every managed server, even those servers that the deployment was not currently targeted on. (This was done to handle an edge case where a deployment's target configuration was changed while it remained running on previously targeted servers.) 
    We decided supporting that edge case did not warrant the performance impact for all, and instead we now only look at the status of a deployment on the servers it is targeted to. Comprehensive status continues to be available if a user clicks on the 'status' field for a deployment.

    Finally, changes have been made to the System Status portlet to reduce its impact on Console page display times. Obtaining health information for this display requires several mbean interactions with managed servers. In previous releases, this mbean interaction occurred with every display, and any delay or impediment in these interactions was reflected in the display time for every page. To reduce this impact, we've made several changes in this portlet:

    - Using Work Management to obtain health concurrently

    - Applying the operation timeout configuration to constrain how long we will wait

    - Caching health information to reduce the cost during rapid navigation from page to page, and only obtaining new health information if the previous information is over 30 seconds old

    - Eliminating health collection if this portlet is minimized

    Together, these Console changes have resulted in significant performance improvements for the customers with large configurations and high latency that we have worked with during their development, and some lesser performance improvements for those with small configurations and very fast networks. These changes will be included in the 11g Rel 1 patch set 2 (10.3.3.0) release of WebLogic Server.
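
    The fetch pattern described above, concurrent per-server work items bounded by the Management Operation Timeout, can be approximated with plain java.util.concurrent primitives. This is only an illustrative sketch, not the Console's actual code: the real implementation uses WebLogic's Work Management feature rather than a raw ExecutorService, and fetchRuntimeAttributes is a hypothetical stand-in for the real runtime mbean calls.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutionException;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;
        import java.util.concurrent.TimeUnit;

        public class TimedRuntimeFetch {

            // Hypothetical stand-in for the per-server runtime mbean lookup.
            static String fetchRuntimeAttributes(String serverName) throws Exception {
                // ... remote call to the managed server's runtime mbeans would go here ...
                return serverName + ": RUNNING";
            }

            public static void main(String[] args) throws InterruptedException {
                List<String> servers = List.of("ManagedServer1", "ManagedServer2", "ManagedServer3");
                long operationTimeoutMs = 5_000; // plays the role of the Management Operation Timeout

                ExecutorService pool = Executors.newFixedThreadPool(servers.size());
                List<Callable<String>> tasks = new ArrayList<>();
                for (String server : servers) {
                    tasks.add(() -> fetchRuntimeAttributes(server));
                }

                // Wait for all work items, but no longer than the operation timeout.
                List<Future<String>> results = pool.invokeAll(tasks, operationTimeoutMs, TimeUnit.MILLISECONDS);

                for (int i = 0; i < servers.size(); i++) {
                    Future<String> f = results.get(i);
                    if (f.isCancelled()) {
                        // Keep the local configuration data and flag the runtime data as incomplete.
                        System.out.println(servers.get(i) + ": runtime data not available (timed out)");
                    } else {
                        try {
                            System.out.println(f.get());
                        } catch (ExecutionException e) {
                            System.out.println(servers.get(i) + ": error - " + e.getCause());
                        }
                    }
                }
                pool.shutdownNow();
            }
        }

    Tasks that do not finish inside the timeout come back cancelled, so a table row can still be rendered with local configuration data plus a note that the runtime information is incomplete, which mirrors the behaviour described for the Servers table pages.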

    Read the article

  • Basic web architecture : Perl -> PHP

    - by Sunny Jim
    This is an architecture question. If there is a better forum, please redirect me. Apologies in advance.

    Essentially every website is built around a relational database, right? When a user uploads form data, that data is stored in a table. The problem is that the table structure(s) need to be modified whenever the website form is modified, although I understand that modern web frameworks work around this problem by automatically building forms based on the table structure.

    For the last 20 years, I have been building websites using Perl. When I first encountered this problem, the easiest solution was to save serialized Perl objects as data BLOBs. After XML's introduction, this solution worked even better because XML is so effective for representing arbitrary data. This approach is consistent with the original Perl principles of Hubris, Laziness, and Impatience, and I'm pretty committed to it. Obviously, the biggest drawback is that this solution locks me into the Perl interpreter. So instead, I've just completed a prototype of a universal RDB table. The prototype is written in Perl, but porting it to PHP will be a good chance to develop those skills.

    The principle is based on the XML::Dumper module, which converts arbitrary Perl data structures into uniform XML. With my approach, each XML node is stored as a table record. I underestimated this undertaking and rolled something up myself. But the effort allows me to discuss the basic design instead of implementation details.

    As mentioned, I'm pretty committed to this approach of using flexible data structures. It's been successfully deployed on many websites, large and complex. But are there any drawbacks I've overlooked? I rolled my own. Are other people taking a similar approach to their data? What kinds of solutions are available?

    I have not abandoned my dream of eventually contributing something useful to the worldwide community. In order to proceed, the next step would be peer review. How does one pursue that effort? Thanks! -Jim
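
    For what it's worth, the "one record per node" idea can be sketched independently of Perl or PHP. The fragment below is a hypothetical Java illustration only; the (id, parent_id, name, value) layout is an assumption about what a universal node table might look like, not Jim's actual schema:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Map;

        public class UniversalTableSketch {

            // One row per node in the serialized structure: (id, parentId, name, value).
            record NodeRow(int id, Integer parentId, String name, String value) {}

            static int nextId = 0;

            // Recursively flatten nested data into node rows; leaves carry values, branches do not.
            static void flatten(String name, Object data, Integer parentId, List<NodeRow> out) {
                int id = nextId++;
                if (data instanceof Map<?, ?> map) {
                    out.add(new NodeRow(id, parentId, name, null));
                    for (Map.Entry<?, ?> e : map.entrySet()) {
                        flatten(String.valueOf(e.getKey()), e.getValue(), id, out);
                    }
                } else {
                    out.add(new NodeRow(id, parentId, name, String.valueOf(data)));
                }
            }

            public static void main(String[] args) {
                Map<String, Object> form = Map.of(
                        "customer", Map.of("name", "Ada", "email", "ada@example.com"),
                        "comments", "Great service");

                List<NodeRow> rows = new ArrayList<>();
                flatten("form", form, null, rows);
                // Each row would become one INSERT into the single generic node table.
                rows.forEach(System.out::println);
            }
        }

    Each emitted row corresponds to one INSERT into the same generic table, so the schema never needs altering when the form changes, which is the property the question is after.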

    Read the article

  • Is it possible to have multiple sets of key columns in a table?

    - by Peter Larsson
    Filtered indexes are one of my new favorite things with SQL Server 2008. I am currently working on designing a new data warehouse. There are two restrictions doing this: it has to be fed from the old legacy system with both historical data and new data, and it has to be fed from the new business system with new data. When we incorporate the new business system, we are going to do that for one market only. It means the old legacy business system still will produce new data for other markets (together with historical data for all markets) and the new business system will produce new data for that one market only. Sounds interesting so far? To accomplish this I did thorough research about the business requirements and the business intelligence needs. Then I went on to design the sucker. How does this relate to filtered indexes, you ask? I'll give one example, the Stock transaction table. Well, the key columns for the old legacy system are different from the key columns from the new business system. The old legacy system has a key of 5 columns: Movement date, Movement time, Product code, Order number, and Sequence number within shipment. And on top of it all, I found out that the Movement Time column is not really a time. It starts out like a time HH:MM:SS but seconds are added for each delivery within the shipment, so a Movement Time can look like "12:11:68". The sequence number is ordered over the distributors for shipment. As I said, it is a legacy system. The new business system has one key column, the Movement DateTime (accuracy down to 100th of a nanosecond). So how to deal with this? One option would be to have two stock transaction tables, one for the legacy system and one for the new business system. But that would lead to a maintenance overhead and using partitioned views for getting data out of the warehouse. Filtered indexes will be of great use here. MovementDate DATETIME2(7), MovementTime CHAR(8) NULL, ProductCode VARCHAR(15) NOT NULL, OrderNumber VARCHAR(30) NULL, SequenceNumber INT NULL. The sequence number is not even used in the new system, so I created a new IDENTITY column, with a clustered index on it, which can be shared by both systems. Then I created one unique filtered index for the old system like this CREATE UNIQUE NONCLUSTERED INDEX IX_Legacy (MovementDate, MovementTime, ProductCode, SequenceNumber) INCLUDE (OrderNumber, Col5, Col6, ... ) WHERE SequenceNumber IS NOT NULL And then I created a new unique filtered index for the new business system like this CREATE UNIQUE NONCLUSTERED INDEX IX_Business (MovementDate) INCLUDE (ProductCode, OrderNumber, Col12, ... ) WHERE SequenceNumber IS NULL This way I can have multiple sets of key columns on the same base table, which is shared by both systems.

    Read the article

  • ArchBeat Link-o-Rama for 2012-05-31

    - by Bob Rhubart
    Eclipse DemoCamp - June 2012 - Redwood Shores, CA wiki.eclipse.org Oracle HQ 10 Twin Dolphin Dr. Redwood Shores, CA Presentations: The evolution of Java persistence, Doug Clarke, EclipseLink Project Lead, Oracle Eclipse Project Sapphire, Konstantin Komissarchik, Sapphire Project Lead, Oracle Developing Rich ADF Applications with Java EE, Greg Stachnick, Oracle Leveraging OSGi In The Enterprise, Kamal Muralidharan, Lead Engineer, eBay NVIDIA Nsight Eclipse Edition, Goodwin (Tech lead - Visual tools), Eugene Ostroukhov (Senior engineer – Visual tools) BI Architecture Master Class for Partners - Oracle Architecture Unplugged blogs.oracle.com June 21, 2012 This workshop will be highly interactive and is aimed at Oracle OPN member partners who are IT Architects and BI+W specialists. This will be a highly interactive session and does not involve slide presentations or product feature details; it addresses IT-Architectural issues and considerations for the IT-Architect Community. 2012 Oracle Fusion Middleware Innovation Awards - Win a FREE Pass to Oracle OpenWorld 2012 in SF www.oracle.com Share your use of Oracle Fusion Middleware solutions and how they help your organization drive business innovation. You just might win a free pass to Oracle Openworld 2012 in San Francisco. Deadline for submissions is July 17, 2012. IT professionals: Very much the time to change our approach | Andy Mulholland www.capgemini.com This final post by retiring Capgemini CTO blogger Andy Mulholland is a must-read for anyone in IT. 10 Great WebCenter Sites Resources (FatWire) | John Brunswick www.johnbrunswick.com John Brunswick shares "some good resources that span the WebCenter Sites and FatWire brands, to get a consolidated list of helpful destinations for ongoing education." Cloning a WebCenter Portal Managed Server | Maiko Rocha blogs.oracle.com WebCenter and ADF A-Team blogger Maiko Rocha shows how to easily add a new managed server to a single-node domain to make it a cluster. Sorting and Filtering By Model-Based LOV Display Value | Steven Davelaar blogs.oracle.com How-to by WebCenter and ADF A-Team blogger Steven Davelaar. Designing and Developing Cross-Cutting Features | Stephen Rylander www.infoq.com Architects are often tasked with a business feature that must span systems. This article will provide strategies to handle the change and guide your thinking about separating system boundaries and what that means for your technical design. Thought for the Day "A committee is a group of people who individually can do nothing, but who, as a group, can meet and decide that nothing can be done." — Fred Allen (5/31/1894 – 3/17/1956) Source: Brainy Quote

    Read the article

  • Enterprise 2.0 Conference recap

    - by kellsey.ruppel
    We had a great week in Boston attending the Enterprise 2.0 Conference. We learned a lot from industry thought leaders and had a chance to speak with a lot of different folks about social and collaboration technologies and trends. Of all the conferences we attend, this one definitely has a different “feel”. It seems like the attendees are younger, they dress hipper, and there is much more liveliness all around. A few of the sessions addressed this, as the "Millennials," or Generation Y, have been using Web 2.0 tools such as Facebook and Twitter for many years now, and as they are entering the workforce they are expecting similar tools to be a part of how they accomplish their job tasks. It's important to note that it's not just Millennials that are expecting these technologies, as workers young and old alike benefit from social and collaboration tools. I’ve highlighted some of the takeaways I had, as well as a reaction from John Brunswick, who helped us in staffing the booth. Giving your employees choices is empowering, but if there is no course of action or plan, it’s useless. There is no such thing as collaboration without a goal. In a few years, social will become a feature in the “platform”, a component of collaboration. Social will become part of the norm – just like email is expected when you start a job at a company, Social will be too. 1 in 3 of your employees are using tools your company doesn't sanction (how scary is this?!) 25,000 pieces of content are created every second. Context is king. Social tools help us navigate and manage the complexities we face with information overload. We need to design products for the way people work. Consumerization of the enterprise - bringing social tools like Facebook to the organization. From John Brunswick: "The conference had solid attendance, standing as a testament to organizations making a concerted effort to understand what social tools exist to support their businesses. Many vendors were narrowly focused and people were pleasantly surprised at the breadth of capability provided by Oracle WebCenter. People seemed to feel that it just made sense that social technology provides the most benefit when presented in the context of key business data." Did you attend the conference? What were some of your key takeaways?

    Read the article

  • Why I Love Microsoft Development

    - by Brian Lanham
    I've been writing software for a while and recently had an opportunity to broaden my horizons and start developing for iOS. We decided to leverage, as much as possible, our existing skills and use MonoTouch and MonoDevelop by Novell. For those of you who do not know, Mono is a .NET port originally designed for Linux but adapted for other platforms as well. MonoTouch is a port specifically for building iOS applications using the .NET framework. MonoDroid is a port (in CTP-esque release) for Android. A MISSING COMPONENT - VISUAL DESIGNER MonoDevelop lacks one very significant component compared with other tools I am using: NO VISUAL DESIGNER. Instead of using an integrated visual designer, MonoDevelop shells to the Mac OS "Interface Builder". Since MonoDevelop lets me have a "Visual Studio-esque" feel *and* I get to use C#, AND it's FREE, I am gladly willing to overlook this. In fact, it's not even a question. Free? Sure, I'll take it with no Visual Designer. In my experience I've grown from UNIX and DOS to .NET development through many steps. Java/JSP/Servlets; Windows; Web; etc. I've been doing .NET for quite a few years and I guess I just got "comfortable" with the tools. WHY AM I NOT GETTING IT? Interface Builder (IB) is amazingly confusing for me. I had the opportunity to speak at the Northern VA Code Camp on 12/11/2010. My presentation was "Getting Started with iOS Development using MonoTouch and C#". At the visual design part of the presentation, I asked one of the 3 or 4 Mac developers in the room about my confusion with the IB. I don't understand why the "Classes" list includes objects. I don't understand what "File's Owner" is. And, most importantly, WHAT THE HECK IS AN OUTLET AND WHY IS IT NECESSARY?!?!? His response to these questions (especially Outlets): "They did it wrong." I'm accustomed to a visual designer that creates variables for graphical widgets for me. Not IB. Instead, I have to create "Outlets" manually. I still do not understand why, and the explanation from a seasoned Mac developer is that it's wrong. (He received nods of confirmation from the other Mac devs in the room.) I LOVE MS DEV I love development for Microsoft platforms using Microsoft development tools. I love Windows 7. I love Visual Studio 2010. I love SQL Server. Azure, Entity Framework, Active Directory, Office, WCF/WF/WPF, etc. are all designed with integration in mind. They are also all designed with developers in mind. Steve Ballmer recently ranted "It's the developers!" That's why it is relatively quick to build apps using MS tools. Clearly, MS knows that while we usually enjoy building technology solutions, we are here to make money. And we need tools that accelerate our time to market without compromising the power and quality of our solutions. So, yeah, I am sucking up I guess. But I love Microsoft Development. Thank you, Microsoft, for providing the plethora of great development tools. P.S. (but please slow down a bit…I'm having trouble keeping up!)

    Read the article

  • Business Strategy - Google Case Study

    Business strategy, as defined by SMBTN.com, is a term used in business planning that implies a careful selection and application of resources to obtain a competitive advantage in anticipation of future events or trends. In more general terms, business strategy is positioning a company so that it has the greatest competitive advantage over others in the markets and industries that it participates in. This process involves making corporate decisions regarding which markets to provide goods and services in, pricing, acceptable quality levels, and how to interact with others in the marketplace. The primary objective of business strategy is to create and increase value for all of its shareholders and stakeholders through the creation of customer value. According to InformationWeek.com, Google has a distinctive technology advantage over its competitors such as Microsoft, eBay, Amazon, and Yahoo. Google utilizes custom high-performance systems which are cost efficient because they can scale to extreme workloads. This hardware allows for a huge cost advantage over its competitors. In addition, InformationWeek.com interviewed Stephen Arnold, who stated that Google’s programmers are 50%-100% more productive compared to programmers working for their competitors. He based this theory on Google’s competitors having to spend up to four times as much just to keep up. In addition to Google’s technological advantage, they also have developed a decentralized management schema where employees report directly to multiple managers and team project leaders. This allows the responsibility of the technology department to be shared amongst multiple senior level engineers and removes the need for a singular department head to oversee the activities of the department. This is a departure from the standard management style. Typically a department head like a CIO or CTO would oversee the department’s global initiatives and business functionality. This would then be passed down and administered through middle management and implemented by programmers, business analysts, network administrators, and database administrators. It goes without saying that an IT professional’s responsibilities would be directed by Google’s technological advantage and management strategy, simply because they work within the department and would have to design, develop, and support the high-performance systems, and would have to report to multiple managers and project leaders on a regular basis. Since Google was established and driven by new and emerging technology, all other departments would be directly impacted by the technology department. In fact, they would have to cater to the technology department since it is a huge driving force in the success of Google. Reference: http://www.smbtn.com/smallbusinessdictionary/#b http://www.informationweek.com/news/software/linux/showArticle.jhtml?articleID=192300292&pgno=1&queryText=&isPrev=

    Read the article

  • Drawing isometric map in canvas / javascript

    - by Dave
    I have a problem with my map design for my tiles. I set the player position, which is meant to be the middle tile that the canvas is looking at. However, the calculation to put them in an x:y pixel location is completely messed up for me and I don't know how to fix it. This is what I tried: var offset_x = 0; //used for scrolling on x var offset_y = 0; //used for scrolling on y var prev_mousex = 0; //for movePos function var prev_mousey = 0; //for movePos function function movePos(e){ if (prev_mousex === 0 && prev_mousey === 0) { prev_mousex = e.pageX; prev_mousey = e.pageY; } offset_x = offset_x + (e.pageX - prev_mousex); offset_y = offset_y + (e.pageY - prev_mousey); prev_mousex = e.pageX; prev_mousey = e.pageY; run = true; } player_posx = 5; player_posy = 55; ct = 19; for (i = (player_posx-ct); i < (player_posx+ct); i++){ //horizontal for (j=(player_posy-ct); j < (player_posy+ct); j++){ // vertical //img[0] is 64by64 but the graphic is 64by32 the rest is alpha space var x = (i-j)*(img[0].height/2) + (canvas.width/2)-(img[0].width/2); var y = (i+j)*(img[0].height/4); var abposx = x - offset_x; var abposy = y - offset_y; ctx.drawImage(img[0],abposx,abposy); } } Now, based on these numbers, the first renderable tile is i = 0 & j = 36, as negative indices are not in the array. But for i = 0 and j = 36 the position it calculates is -1120 : 592. Does anyone know how to center it on the canvas view properly?
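    For comparison, a minimal sketch of the usual fix (written here in Python as plain coordinate math; the names are illustrative, not the poster's variables) is to project tile coordinates relative to the player, so the player's tile always lands at the canvas center regardless of how large the map indices are:

```python
# Illustrative sketch: project tile (i, j) so that the player's tile lands at
# the center of the canvas. The tile graphic is assumed 64 wide by 32 tall.
TILE_W, TILE_H = 64, 32

def tile_to_screen(i, j, player_i, player_j, canvas_w, canvas_h):
    # Work in coordinates relative to the player so the player is the origin.
    di = i - player_i
    dj = j - player_j
    x = (di - dj) * (TILE_W // 2) + canvas_w // 2 - TILE_W // 2
    y = (di + dj) * (TILE_H // 2) + canvas_h // 2 - TILE_H // 2
    return x, y

# The player's own tile projects to (roughly) the canvas center:
print(tile_to_screen(5, 55, 5, 55, 800, 600))  # -> (368, 284)
```

    Scrolling offsets can then be subtracted from the result exactly as in the original loop.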

    Read the article

  • Best depth sorting method for a Top Down 2D game using a 3D physics engine

    - by Alic44
    I've spent many days googling this and still have issues with my game engine I'd like to ask about, which I haven't seen addressed before. I think the problem is that my game is an unusual combination of a completely 2D graphical approach using XNA's SpriteBatch, and a completely 3D engine (the amazing BEPU physics engine) with rotation mostly disabled. In essence, my question is similar to this one (the part about "faux 3D"), but the difference is that in my game, the player as well as every other creature is represented by 3D objects, and they can all jump, pick up other objects, and throw them around. What this means is that sorting by one value, such as a Z position (how far north/south a character is on the screen) won't work, because as soon as a smaller creature jumps on top of a larger creature, or a box, and walks backwards, the moment its z value is less than that other creature, it will appear to be behind the object it is actually standing on. I actually originally solved this problem by splitting every object in the game into physics boxes which MUST have a Y height equal to their Z depth. I then based the depth sorting value on the object's y position (how high it is off the ground) PLUS its z position (how far north or south it is on the screen). The problem with this approach is that it requires all moving objects in the game to be split graphically into chunks which match up with a physical box which has its y dimension equal to its z dimension. Which is stupid. So, I got inspired last night to rewrite with a fresh approach. My new method is a little more complex, but I think a little more sane: every object which needs to be sorted by depth in the game exposes the interface IDepthDrawable and is added to a list owned by the DepthDrawer object. IDepthDrawable contains: public interface IDepthDrawable { Rectangle Bounds { get; } //possibly change this to a class if struct copying of the xna Rectangle type becomes an issue DepthDrawShape DepthShape { get; } void Draw(SpriteBatch spriteBatch); } The Bounds Rectangle of each IDepthDrawable object represents the 2D Axis-Aligned Bounding Box it will take up when drawn to the screen. Anything that doesn't intersect the screen will be culled at this stage and the remaining on-screen IDepthDrawables will be Bounds tested for intersections with each other. This is where I get a little less sure of what I'm doing. Each group of collisions will be added to a list or other collection, and each list will sort itself based on its DepthShape property, which will have access to the object-to-be-drawn's physics information. For starting out, lets assume everything in the game is an axis aligned 3D Box shape. Boxes are pretty easy to sort. Something like: if (depthShape1.Back > depthShape2.Front) //if depthShape1 is in front of depthShape2. //depthShape1 goes on top. else if (depthShape1.Bottom > depthShape2.Top) //if depthShape1 is above depthShape2. //depthShape1 goes on top. //if neither of these are true, depthShape2 must be in front or above. So, by sorting draw order by several different factors from the physics engine, I believe I can get a really correct draw order. My question is, is this a good way of going about this, or is there some tried and true, tested way which is completely different and has somehow completely eluded me on the internets? 
    And, if this does seem like a good way to remake my draw order sorting, what's the right sorting algorithm for reordering the Bounds Rectangle collision lists, and how do you deal with a Bounds Rectangle colliding with two different objects which don't collide with each other? I know these are solved problems, but I've only been programming for a year so any specific input here will be greatly appreciated. Thanks for reading this far, ye who made it -- sorry it was so long!
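    One way to read that question (a sketch under assumed names, not the poster's code): treat "A draws in front of B" as a directed edge between overlapping sprites and topologically sort each overlap group. Two sprites whose bounds both hit a third but not each other simply get no edge between them, so any relative order is acceptable. Note that cyclic overlap relationships are still possible and would need to be detected and broken; this sketch just omits cycle members from the result.

```python
from collections import defaultdict, deque

def in_front(a, b):
    # a, b are axis-aligned 3D boxes: dicts with "front"/"back" (z) and
    # "top"/"bottom" (y). True means a should be drawn on top of (after) b.
    if a["back"] > b["front"]:   # a is entirely in front of b
        return True
    if a["bottom"] > b["top"]:   # a is entirely above b
        return True
    return False

def draw_order(boxes, overlaps):
    # overlaps: list of (i, j) index pairs whose screen bounds intersect.
    edges = defaultdict(set)
    indeg = [0] * len(boxes)
    for i, j in overlaps:
        if in_front(boxes[i], boxes[j]):
            first, second = j, i     # draw j first, then i on top
        else:
            first, second = i, j
        if second not in edges[first]:
            edges[first].add(second)
            indeg[second] += 1
    # Kahn's algorithm: emit anything with no remaining "behind" constraints.
    queue = deque(k for k in range(len(boxes)) if indeg[k] == 0)
    order = []
    while queue:
        k = queue.popleft()
        order.append(k)
        for nxt in edges[k]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    return order  # indices in back-to-front draw order
```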

    Read the article

  • Determining explosion radius damage - Circle to Rectangle 2D

    - by Paul Renton
    One of the Cocos2D games I am working on has circular explosion effects. These explosion effects need to deal a percentage of their set maximum damage to all game characters (represented by rectangular bounding boxes as the objects in question are tanks) within the explosion radius. So this boils down to circle to rectangle collision and how far away the circle's radius is from the closest rectangle edge. I took a stab at figuring this out last night, but I believe there may be a better way. In particular, I don't know the best way to determine what percentage of damage to apply based on the distance calculated. Note : All tank objects have an anchor point of (0,0) so position is according to bottom left corner of bounding box. Explosion point is the center point of the circular explosion. TankObject * tank = (TankObject*) gameSprite; float distanceFromExplosionCenter; // IMPORTANT :: All GameCharacter have an assumed (0,0) anchor if (explosionPoint.x < tank.position.x) { // Explosion to WEST of tank if (explosionPoint.y <= tank.position.y) { //Explosion SOUTHWEST distanceFromExplosionCenter = ccpDistance(explosionPoint, tank.position); } else if (explosionPoint.y >= (tank.position.y + tank.contentSize.height)) { // Explosion NORTHWEST distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x, tank.position.y + tank.contentSize.height)); } else { // Exp center's y is between bottom and top corner of rect distanceFromExplosionCenter = tank.position.x - explosionPoint.x; } // end if } else if (explosionPoint.x > (tank.position.x + tank.contentSize.width)) { // Explosion to EAST of tank if (explosionPoint.y <= tank.position.y) { //Explosion SOUTHEAST distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x + tank.contentSize.width, tank.position.y)); } else if (explosionPoint.y >= (tank.position.y + tank.contentSize.height)) { // Explosion NORTHEAST distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x + tank.contentSize.width, tank.position.y + tank.contentSize.height)); } else { // Exp center's y is between bottom and top corner of rect distanceFromExplosionCenter = explosionPoint.x - (tank.position.x + tank.contentSize.width); } // end if } else { // Tank is either north or south and is inbetween left and right corner of rect if (explosionPoint.y < tank.position.y) { // Explosion is South distanceFromExplosionCenter = tank.position.y - explosionPoint.y; } else { // Explosion is North distanceFromExplosionCenter = explosionPoint.y - (tank.position.y + tank.contentSize.height); } // end if } // end outer if if (distanceFromExplosionCenter < explosionRadius) { /* Collision :: Smaller distance larger the damage */ int damageToApply; if (self.directHit) { damageToApply = self.explosionMaxDamage + self.directHitBonusDamage; [tank takeDamageAndAdjustHealthBar:damageToApply]; CCLOG(@"Explsoion-> DIRECT HIT with total damage %d", damageToApply); } else { // TODO adjust this... turning out negative for some reason... damageToApply = (1 - (distanceFromExplosionCenter/explosionRadius) * explosionMaxDamage); [tank takeDamageAndAdjustHealthBar:damageToApply]; CCLOG(@"Explosion-> Non direct hit collision with tank"); CCLOG(@"Damage to apply is %d", damageToApply); } // end if } else { CCLOG(@"Explosion-> Explosion distance is larger than explosion radius"); } // end if } // end if Questions: 1) Can this circle to rect collision algorithm be done better? Do I have too many checks? 2) How to calculate the percentage based damage? 
My current method generates negative numbers occasionally and I don't understand why (Maybe I need more sleep!). But, in my if statement, I ask if distance < explosion radius. When control goes through, distance/radius must be < 1 right? So 1 - that intermediate calculation should not be negative. Appreciate any help/advice!
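    A possible reading of question 2 (an illustrative sketch, not the poster's code): in the posted expression (1 - (d/r) * maxDamage) the multiplication binds to (d/r) before the subtraction, which would explain the negative values; the intended falloff is (1 - d/r) * maxDamage. The eight-way branching in question 1 can also be collapsed by clamping the explosion center to the rectangle. The names below are assumptions:

```python
def explosion_damage(exp_x, exp_y, radius, max_damage,
                     rect_x, rect_y, rect_w, rect_h):
    # Clamp the explosion center to the rectangle to find the closest point
    # on (or inside) the tank's bounding box, then measure the distance.
    closest_x = min(max(exp_x, rect_x), rect_x + rect_w)
    closest_y = min(max(exp_y, rect_y), rect_y + rect_h)
    dist = ((exp_x - closest_x) ** 2 + (exp_y - closest_y) ** 2) ** 0.5

    if dist >= radius:
        return 0  # outside the blast
    # Linear falloff: full damage when the box touches the explosion center,
    # none at the edge of the radius.
    return int((1 - dist / radius) * max_damage)

# Example: box edge 30 px from the blast center, 100 px radius, 80 max damage.
print(explosion_damage(0, 0, 100, 80, 30, -10, 40, 20))  # -> 56
```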

    Read the article

  • Agile PLM Highlights from Oracle OpenWorld 2012

    - by Kerrie Foy
    Thank you to everyone who joined us at Oracle OpenWorld this year, either in person or virtually (thanks for tweeting #oowplm)! From customer presentations to after-hours networking opportunities, there was a lot to see and do during the entire conference. Sessions It was our pleasure to feature several customer speakers during our PLM sessions at OpenWorld from such companies as Starbucks, Coca-Cola, Facebook, Eli Lilly, and many more. Each had a unique perspective to share and fascinating insight into how they successfully leverage Agile PLM to facilitate profitable innovation, protect brand integrity, streamline operations, manage compliance, launch faster, etc. For example, during the Product Value Chain keynote session, CIO Chris Bedi of JDSU shared how they implemented Agile PLM to support business imperatives around rapid innovation, centralizing product information, collaboration, and eliminating the “Excel gymnastics” required to obtain global portfolio visibility. In just 120 days after implementing, JDSU employees reported significant improvements around product record management, new product introduction, engineering collaboration and more, which created a better work environment to enable critical innovation. I could write on and on about the almost 20 sessions! So to spare yourselves, please visit launch.oracle.com/?plmopenworld2012; it’s a curated selection of PLM presentations from the OpenWorld Content Catalog and available on-demand. Enjoy! Agile Innovation Management During OpenWorld, we announced an exciting new addition to the Agile PLM applications called Innovation Management that redefines the industry’s scope of product lifecycle management. Our broad vision of complete enterprise PLM for the entire Product Value Chain already broke new ground by helping organizations extend PLM disciplines downstream by connecting product design to commercialization processes; now we are helping executives look farther upstream in the early innovation phases to ultimately close the gap between strategy and execution that so commonly nags innovation initiatives. More on this coming soon so stay tuned! Unique Networking Opportunities We know it can be challenging during OpenWorld to find time to productively connect and network with your industry peers, so we hosted an Agile PLM “Birds of a Feather” networking brunch for the second year in a row. At a fine restaurant close to Moscone we hosted nine tables, each with only ten seats to encourage active conversation. Furthermore, guests could select from a list of predetermined table topics sponsored by a specialized PLM partner to guarantee – even more so – that they were seated with like-minded company and optimizing their time at the conference. Everyone enjoyed the opportunity to easily connect with other PLM users during OpenWorld in a more casual setting. What’s Next? Thank you again to all who joined us! If you haven't yet, mark your calendar to join us for the next Oracle Agile PLM conference at the Value Chain Summit in San Francisco, February 4-6 in 2013! We’ll have 40 sessions of PLM content in four tracks. Don’t miss it! You can sign up to be notified when official registration opens by visiting www.oracle.com/goto/vcs.

    Read the article

  • Grid pathfinding with a lot of entities

    - by Vee
    I'd like to explain this problem with a screenshot from a released game, DROD: Gunthro's Epic Blunder, by Caravel Games. The game is turn-based and tile-based. I'm trying to create something very similar (a clone of the game), and I've got most of the fundamentals done, but I'm having trouble implementing pathfinding. Look at the screenshot. The guys in yellow are friendly, and want to kill the roaches. Every turn, every guy in yellow pathfinds to the closest roach, and every roach pathfinds to the closest guy in yellow. By closest I mean the target with the shortest path, not a simple distance calculation. All of this without any kind of slowdown when loading the level or when passing turns. And all of the entities change position every turn. Also (not shown in the screenshot), there can be doors that open and close and change the level's layout. Impressive. I've tried implementing pathfinding in my clone. My first attempt was making every roach find a path to a yellow guy every turn, using a breadth-first search algorithm. Obviously incredibly slow with more than a single roach, and it would get exponentially slower with more than a single yellow guy. My second attempt was making every yellow guy generate a pathmap (still breadth-first search) every time he moved. This worked perfectly with multiple roaches and a single yellow guy, but adding more yellow guys made the game slow and unplayable. My last attempt was implementing JPS (jump point search). Every entity would individually calculate a path to its target. Fast, but only with a limited number of entities. Having fewer than half the entities in the screenshot would make the game slow. And also, I had to get the "closest" enemy by calculating distance, not the shortest path. I've asked on the DROD forums how they did it, and a user replied that it was breadth-first search. The game is open source, and I took a look at the source code, but it's C++ (I'm using C#) and I found it confusing. I don't know how to do it. Every approach I tried isn't good enough. And I believe that DROD generates global pathmaps, somehow, but I can't understand how every entity finds the best individual path to other entities that move every turn. What's the trick? This is a reply I just got on the DROD forums: Without having looked at the code I'd wager it's two (or so) pathmaps for the whole room: one to the nearest enemy, and one to the nearest friendly for every tile. There's no need to make a separate pathmap for every entity when the overall goal is "move towards nearest enemy/friendly"... just mark every tile with the number of moves it takes to the nearest target and have the entity choose the move that takes it to the tile with the lowest number. To be honest, I don't understand it that well.
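    The reply describes what is often called a multi-source breadth-first flood fill: seed one BFS with every target at distance 0, expand once across the whole room, and each entity then steps to the neighboring tile with the lowest number. Only two such maps are needed per turn (one seeded with all roaches, one with all yellow guys), no matter how many entities exist. A minimal sketch follows, with an assumed grid representation and 4-way movement (DROD itself also allows diagonals):

```python
from collections import deque

NEIGHBORS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def build_pathmap(walkable, targets):
    # walkable: 2D list of booleans; targets: list of (row, col) tiles.
    # Returns a grid of distances to the nearest target (None = unreachable).
    rows, cols = len(walkable), len(walkable[0])
    dist = [[None] * cols for _ in range(rows)]
    queue = deque()
    for r, c in targets:            # seed every target at distance 0
        dist[r][c] = 0
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in NEIGHBORS:
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and walkable[nr][nc] and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

def step_toward_nearest(dist, r, c):
    # An entity at (r, c) moves to the neighbor with the smallest distance.
    best = (r, c)
    for dr, dc in NEIGHBORS:
        nr, nc = r + dr, c + dc
        if (0 <= nr < len(dist) and 0 <= nc < len(dist[0])
                and dist[nr][nc] is not None
                and (dist[best[0]][best[1]] is None
                     or dist[nr][nc] < dist[best[0]][best[1]])):
            best = (nr, nc)
    return best
```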

    Read the article

  • Is the Joel Test really a good gauging tool?

    - by henry
    I just learned about the Joel Test. I have been a computer programmer for 22 years, but somehow I never heard about it before. I consider my best job so far to be this small investment management company with 30 employees and only three people in the IT department. I am no longer with them, but I had been working there for five years – my longest streak with any given company. To my surprise they scored extremely poorly on the Joel Test. The only two questions I would answer "yes" to are #4: Do you have a bug database? And #9: Do you use the best tools money can buy? Everything else is either "sometimes" or straight "no". Here is what I liked about the company however: Good pay. They bragged about it to my face, and I bragged about it to their face, so it was almost like a family environment. I always knew the big picture. When writing code to solve a particular problem there was no ambiguity about the business nature of that problem. Even though we did not always have written specifications, we could ask business users a question anytime, often yelling it across the floor. I could even talk to executives any time I felt like doing it: no appointment necessary. Immediate feedback. Once we implemented a solution and made business users happy, they immediately let us know, and we (programmers) became heroes of the moment. No red tape. I could always buy any tools I deemed necessary, and design solutions the way my professional judgment dictated. Flexibility. If I had a mid-day dental appointment near my house rather than near the office, I would send an email to the company: "FYI: I work from home today". As long as one of the three IT guys was on the floor (to help traders in case their monitors went dark) they did not care where the other two were. So the question becomes: How valuable is the Joel Test? Why bother with it?

    Read the article

< Previous Page | 613 614 615 616 617 618 619 620 621 622 623 624  | Next Page >