Search Results

Search found 13703 results on 549 pages for 'small teams'.


  • AIIM, Oracle and Keste - Talking Social Business in LA

    - by Brian Dirking
    We had a great event today in Los Angeles - AIIM, Oracle and Keste presented on how organizations are making social business work. Atle Skjekkeland of AIIM presented How Social Business Is Driving Innovation. Atle talked about a number of fascinating points, such as how answers to questions come from unexpected sources. He cited the fact that 38% of organizations get half or more of their answers from unexpected sources, which speaks to the wisdom of crowds and how people are benefiting from open communications tools to get answers to their questions. He also had a number of hilarious examples of companies that don't get it. If Comcast were to go to YouTube and search for Comcast, they would see that the number one hit after their paid ad is a video of one of their technicians asleep on a customer's couch. It seems that when he called the office for support he was put on hold so long he fell asleep.

    Dan O'Leary and Atle Skjekkeland

    After Atle's presentation I presented on Solving the Innovation Challenge with Oracle WebCenter. Atle had talked about McKinsey's research titled The Rise Of The Networked Enterprise: Web 2.0 Finds Its Payday. I brought in some new McKinsey research that built on that article. The new article is How Social Technologies Are Extending The Organization. A survey of 4,200 global executives brought three conclusions for the future: boundaries among employees, vendors and customers will blur; employee teams will self-organize; and data-driven decisions will rise. These three items were themes that repeated through the day as we went through examples of what customers are doing today.

    Next up was Vince Casarez of Keste. Vince was scheduled to profile one customer, but in an incredible 3-for-1 deal, Vince profiled Alcatel-Lucent, Qualcomm, and NetApp. Each of these implementations had content consolidation elements, as well as user engagement requirements that Keste was able to address with Oracle WebCenter.

    Vince Casarez of Keste

    And we had a couple of good tweets worth reprinting here:

    Daniel O'Leary (@danieloleary): Learning about user engagement and social platforms from @bdirking #AIIM LA and @oracle event pic.twitter.com/1aNcLEUs
    Daniel O'Leary (@danieloleary): Users want to be able to share data and activity streams, work at organizations that embrace social via @bdirking
    Atle Skjekkeland (@skjekkeland): RT @danieloleary: Learning about user engagement and social platforms from @bdirking #AIIM LA and @oracle event pic.twitter.com/EWRYpvJa
    Daniel O'Leary (@danieloleary): Thanks again to @bdirking for an amazing event in LA today, really impressed with the completeness of web center
    Jim Lundy (@JimLundy): @danieloleary @bdirking yes, it is looking good - Web Center
    Shadrach White (@shadrachwhite): @bdirking @heybenito I heard the #AIIM event in LA was a hit

    We had some great conversations through the day, many thanks to everyone who joined in. We look forward to continuing the conversation - thanks again to everyone who attended!

    Read the article

  • Working with FusionCharts using ASP.NET

    Nowadays, users are constantly looking for more intuitive user interfaces. Because of this, it is vital to develop ASP.NET applications with diagrams such as charts. FusionCharts enables you to plug in a variety of charts driven by data from a wide range of sources with only a small amount of code. In this article, Anand examines the usage of FusionCharts in a step-by-step manner using three different scenarios. He initially examines the plotting of charts using the data from an XML file and also demonstrates the same using values entered by users. Finally, Anand delves deep into the database connectivity aspects using an Access 2010 database with the help of relevant source code examples and screenshots.
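    For the first scenario Anand describes (plotting a chart from an XML file), a minimal code-behind sketch along the following lines could generate the data document the chart reads. The <graph>/<set> element names assume the classic FusionCharts Free XML format, and the page name, file path and figures are illustrative only; check them against the FusionCharts version you are actually using.

    using System;
    using System.Xml;

    public partial class RevenueChart : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Build the XML data document the chart will read.
            var doc = new XmlDocument();
            XmlElement graph = doc.CreateElement("graph");
            graph.SetAttribute("caption", "Quarterly Revenue");
            graph.SetAttribute("xAxisName", "Quarter");
            graph.SetAttribute("yAxisName", "Revenue");
            doc.AppendChild(graph);

            // Sample values; in the article's later scenarios these would
            // come from user input or an Access database instead.
            string[] quarters = { "Q1", "Q2", "Q3", "Q4" };
            int[] revenue = { 420, 910, 720, 550 };
            for (int i = 0; i < quarters.Length; i++)
            {
                XmlElement set = doc.CreateElement("set");
                set.SetAttribute("name", quarters[i]);
                set.SetAttribute("value", revenue[i].ToString());
                graph.AppendChild(set);
            }

            // The chart object on the .aspx page is then pointed at this file.
            doc.Save(Server.MapPath("~/Data/RevenueData.xml"));
        }
    }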

    Read the article

  • Estimating time for planning and technical design using Evidence Based Scheduling

    - by Turgs
    I'm at the beginning of a development project in a large organization. The Functional Requirements are currently being worked out and documented with our business stakeholders by our Enterprise Design department. I'm required to produce Technical Design Documents and manage the team to actually build the solution. I want to try Evidence Based Scheduling, but as I understand it, part of that is breaking the job down into small tasks that are less than 14 hours in duration, which requires me to have already done the Technical Design. Therefore, can Evidence Based Scheduling only be used after the Technical Design has been done? How do you then plan and estimate the time it may take to come up with the Technical Design?

    Read the article

  • Oracle Unveils Oracle Fusion Tap for the iPad

    - by Richard Lefebvre
    Oracle Fusion Tap: Productivity Amplified Anywhere, Anytime

    Oracle today announced the availability of Oracle Fusion Tap, a native iPad application that redefines the level of productivity users can achieve while on-the-go. Oracle Fusion Tap runs off cloud-based enterprise applications and across Oracle Application Cloud Services, requiring only one simple Apple App Store installation. Automatically personalized to each user, Oracle Fusion Tap gives users exactly what they need at their fingertips and provides the long-sought, key functionalities to remain productive and to keep business moving, even when away from the desk.

    Designed specifically for the iPad and the mobile workforce, Oracle Fusion Tap provides access with or without an Internet connection. By grouping functional capabilities into three core areas of "connect," "analyze," and "work," users can easily and directly connect with what they need in the app, complete activities, and move on. As organizations strive for a lean and agile workforce, Oracle Fusion Tap helps users find and make connections with the right people at the right time, obtaining answers to questions quickly and removing roadblocks faster. Oracle Fusion Tap also provides users with secure access to actionable performance indicators and day-to-day management of their workforce and sales force automation.

    Supporting Quotes

    "Both the enterprise and technology providers must recognize the need to innovate and adapt for the increasing mobility of the workforce—not just for sales teams, but across the organization," said Carter Lusher, Research Fellow and Chief Analyst of Enterprise Applications Ecosystem, Ovum. "A mobile application that quickly and powerfully allows employees to make connections, analyze data, and complete activities at any time and wherever they may be located drives new levels of business value and enhances efficiency. Frankly, mobile access is no longer a 'nice to have' but a 'must have.'"

    "The mobile workforce is a business reality, and Oracle Fusion Tap is an example of how Oracle delivers mobile and cloud innovations that fundamentally improve productivity and how we work," said Chris Leone, Senior Vice President of Application Development, Oracle. "With Oracle Fusion Tap users will have an all-in-one, easily extensible app that puts mission-critical data and colleague connection at their fingertips."

    Supporting Resources

    Oracle Fusion Tap
    Oracle Fusion Tap on App Store
    Oracle Fusion Tap YouTube Video
    Oracle CRM on Social Media: @OracleCRM, OracleCRM on Facebook, OracleCRM on YouTube

    Read the article

  • Rapid prototyping and refactoring

    - by Puckl
    Sometimes when I start a small project (like an Android app), I don't know which approach will work out in the end, so I just go for one approach and give it a try. But if I have never used this approach before (for a sort of application I've never programmed before), it is like stepping into unknown terrain. I don't know which libraries to use (maybe I have to try out several libraries) and there are so many unknowns (like: how to get raw audio data in Android). So then my development process goes like this: write a piece of code to see if the approach has a chance (the more uncertain the approach is, the uglier the code gets); if it works, refactor a lot until it is beautiful. I think it would be a waste of time if I planned my software design in detail at this point; it would be like planning a trip without a map. Is this part of agile development? How do you deal with unknown terrain in software development?

    Read the article

  • Square Reader Modified to Record Off Old Reel-to-Reel Tape [Video]

    - by Jason Fitzpatrick
    The Square Reader is a tiny magnetic credit card reader that has taken the mobile payment industry by storm. This clever hack dumps the credit card reading in favor of snagging the audio from old music reels. Evan Long was curious about whether the through-the-headphones interface of the Square Reader could be used to read audio data off old magnetic recordings. With a very small modification (he had to bend a metal tab inside the reader to allow the audio tape to slide through more easily) he was able to listen to and record audio off old reels. Watch the video above to see it in action or hit up the link below to read more about his project. iPod Meets Reel [via Make]

    Read the article

  • What do you do to get your software design robust, flexible and clear?

    - by Oscar
    I am still maturing as a software engineer/designer/architect, as you may want to call it. At this point in time, I am getting small projects, private projects and so on. What I have noticed is that even though I think about the software structure, design some diagrams, and have them really clear in my mind when I start coding, in the end my software is not as flexible and clear as I would like it to be. I would like to ask what kind of approaches, mechanisms or even tricks you use to get your software (and software design) flexible, robust and clear (easy to understand and use). So... any ideas to give to a beginner?

    Read the article

  • Maintaining packages with code - Adding a property expression programmatically

    Every now and then I've come across scenarios where I need to update a lot of packages all in the same way. The usual scenario revolves around a group of packages all having been built off the same package template, and something needs to be updated to keep up with new requirements, a new logging standard for example. You'd probably start by updating your template package, but then you need to address all your existing packages. Often this can run into the hundreds of packages, and clearly that's not a job anyone wants to do by hand. I normally solve the problem by writing a simple console application that looks for files and patches any package it finds, and it is an example of this that I thought I'd tidy up a bit and publish here.

    This sample will look at the package and find any top level Execute SQL Tasks, and change the SQL Statement property to use an expression. It is very simplistic, working on top level tasks only, so nothing inside a Sequence Container or Loop will be checked, but obviously the code could be extended for this if required. The code that actually sets the expression is shown below; the rest is just wrapper code to find the package and to find the task.

    /// <summary>
    /// The CreationName of the Tasks to target, e.g. Execute SQL Task
    /// </summary>
    private const string TargetTaskCreationName = "Microsoft.SqlServer.Dts.Tasks.ExecuteSQLTask.ExecuteSQLTask, Microsoft.SqlServer.SQLTask, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";

    /// <summary>
    /// The name of the task property to target.
    /// </summary>
    private const string TargetPropertyName = "SqlStatementSource";

    /// <summary>
    /// The property expression to set.
    /// </summary>
    private const string ExpressionToSet = "@[User::SQLQueryVariable]";

    ....

    // Check if the task matches our target task type
    if (taskHost.CreationName == TargetTaskCreationName)
    {
        // Check for the target property
        if (taskHost.Properties.Contains(TargetPropertyName))
        {
            // Get the property, check for an expression and set the expression if not found
            DtsProperty property = taskHost.Properties[TargetPropertyName];
            if (string.IsNullOrEmpty(property.GetExpression(taskHost)))
            {
                property.SetExpression(taskHost, ExpressionToSet);
                changeCount++;
            }
        }
    }

    This is a console application, so to specify which packages you want to target you have three options:

    Find all packages in the current folder, the default behaviour if no arguments are specified:
    TaskExpressionPatcher.exe

    Find all packages in a specified folder, pass the folder as the argument:
    TaskExpressionPatcher.exe C:\Projects\Alpha\Packages\

    Find a specific package, pass the file path as the argument:
    TaskExpressionPatcher.exe C:\Projects\Alpha\Packages\Package.dtsx

    The code was written against SQL Server 2005, but just change the reference to Microsoft.SQLServer.ManagedDTS to the SQL Server 2008 version and it will work fine. If you get an error Microsoft.SqlServer.Dts.Runtime.DtsRuntimeException: The package failed to load due to error 0xC0011008… then check that the package is from the correct version of SSIS compared to the referenced assemblies, 2005 vs 2008 in other words.

    Download Sample Project TaskExpressionPatcher.zip (6 KB)
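    The post notes that the rest is just wrapper code to find the package and the task. As a rough sketch, assuming a single package path is passed on the command line and using the SSIS managed API (Microsoft.SqlServer.Dts.Runtime), that wrapper might look something like this; the folder-scanning options would simply wrap the same load-patch-save block in a loop over the .dtsx files found in the given directory.

    using System;
    using Microsoft.SqlServer.Dts.Runtime;

    class TaskExpressionPatcher
    {
        static void Main(string[] args)
        {
            string packagePath = args[0];
            int changeCount = 0;

            // Load the package from disk.
            Application application = new Application();
            Package package = application.LoadPackage(packagePath, null);

            // Walk the top level executables only, as described above.
            foreach (Executable executable in package.Executables)
            {
                TaskHost taskHost = executable as TaskHost;
                if (taskHost == null)
                {
                    continue;
                }

                // ... the CreationName / property / expression check shown
                // above goes here, incrementing changeCount when it patches ...
            }

            // Only rewrite the file if something actually changed.
            if (changeCount > 0)
            {
                application.SaveToXml(packagePath, package, null);
            }

            Console.WriteLine("{0}: {1} change(s)", packagePath, changeCount);
        }
    }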

    Read the article

  • How to refresh Google Reader cache for a specific domain

    - by Renan
    Brief history:

    domain.com was associated with a Blogspot blog.
    domain.com changed to an institutional site and now features a small piece of code instructing RSS readers which file to retrieve: <link rel="alternate" type="application/rss+xml" href="http://domain.tumblr.com/rss" />
    Google Reader didn't update the RSS to the new blog.
    domain.com/blog retrieves the posts correctly because it was never used for that purpose in the old blog.

    How is it possible to force Google Reader to update the cached information? I tried using another RSS reader and it worked perfectly with the new domain. However, when I tried to follow domain.com in another Google Reader account, it still showed the posts from the old blog. It's been almost a month since the aforementioned changes were made.

    Read the article

  • Sound Waves Visualized with a Chladni Plate and Colored Sand [Video]

    - by Jason Fitzpatrick
    This eye-catching demonstration combines a Chladni Plate, four piles of colored sand, and a rubber mallet to great effect–watch as the plate vibrates pattern after pattern into the sand. A Chladni Plate, named after physicist Ernst Chladni, is a steel plate that vibrates when rubbed with a rubber ball-style mallet. Different size balls create different frequencies and each frequency creates a different pattern in the sand placed atop the plate. Watch the video above to see how rubber balls, large and small, change the patterns. [via Neatorama]

    Read the article

  • Who organizes your Matlab code?

    - by KE
    After reading How to organize MATLAB code?, I had a follow up question. If you work in a group of Matlab programmers, who enforces the organization of the shared Matlab code and project matfiles? For example do you have a dedicated Matlab IT person, or does the most senior programmer issue guidelines that everyone must follow, or does everyone agree to follow a system? In my small group, each person has their own 'system'. Matlab code and project matfiles are either piled into a shared drive or tucked away on people's own computers. Hard to recreate work done by another person, or even to locate their code. There were lots of good suggestions on how to get organized. But it seems like someone has to make the trains run on time. Who does it in your group?

    Read the article

  • Book "Team Foundation Server 2012 Starter" published

    - by terje
    During the summer and fall this year, my colleague Jakob Ehn and I worked together on a book project that has now finally hit the stores! The title of the book is Team Foundation Server 2012 Starter and it is published by Packt Publishing. Get it from http://www.packtpub.com/team-foundation-server-2012-starter/book or from Amazon http://www.amazon.com/dp/1849688389

    The book is part of a concept that Packt has with starter books, intended for people new to Team Foundation Server 2012 who want a quick guideline to get it up and working. It covers the fundamentals, from installing and configuring it, to how to use it with source control, work items and builds. It is written as a step-by-step guide, but also includes best practice advice in the different areas. It covers the use of both the on-premises and the TFS Services version. It also has a list of links and references at the end to the most relevant Visual Studio 2012 ALM sites. Our good friend and fellow ALM MVP Mathias Olausson has done the review of the book, thanks again Mathias! We hope the book fills the gap between the different online guide sites and the more advanced books that are out.

    Book Description

    Your quick start guide to TFS 2012, top features, and best practices with hands-on examples.

    Overview
    Install TFS 2012 from scratch
    Get up and running with your first project
    Streamline release cycles for maximum productivity

    In Detail
    Team Foundation Server 2012 is Microsoft's leading ALM tool, integrating source control, work item and process handling, build automation, and testing. This practical "Team Foundation Server 2012 Starter Guide" will provide you with clear step-by-step exercises covering all major aspects of the product. This is essential reading for anyone wishing to set up, organize, and use TFS server. This hands-on guide looks at the top features in Team Foundation Server 2012, starting with a quick installation guide and then moving into using it for your software development projects. Manage your team projects with Team Explorer, one of the many new features for 2012. Covering all the main features in source control to help you work more efficiently, including tools for branching and merging, we will delve into the Agile Planning Tools for planning your product and sprint backlogs. Learn to set up build automation, allowing your team to become faster, more streamlined, and ultimately more productive with this "Team Foundation Server 2012 Starter Guide".

    What you will learn from this book
    Install TFS 2012 on premises
    Access TFS Services in the cloud
    Quickly get started with a new project with product backlogs, source control, and build automation
    Work efficiently with source control using the top features
    Understand how the tools for branching and merging in TFS 2012 help you isolate work and teams
    Learn about the existing process templates, such as Visual Studio Scrum 2.0
    Manage your product and sprint backlogs using the Agile planning tools

    Approach
    This Starter guide is a short, sharp introduction to Team Foundation Server 2012, covering everything you need to get up and running.

    Who this book is written for
    If you are a developer, project lead, tester, or IT administrator working with Team Foundation Server 2012, this guide will get you up to speed quickly and with minimal effort.

    Read the article

  • Full text search with Sphider

    - by Ravi Gupta
    I am searching for a good, lightweight, open source, full text search engine for PHP. I came across a number of options like Lucene, Zend Lucene, Solr etc., but at the same time I also found many people suggesting Sphider for small/medium-sized websites. I looked at the Sphider website a lot but was unable to find out how to use it as a full text search engine. If anybody has worked with it, could you help me figure out whether it supports full text search or not? Edit: please don't suggest any other alternatives for full text search.

    Read the article

  • Game Engine that allows for objects being placed in-game

    - by user185812
    I am looking for a game engine with multiplayer support that allows players to place objects in the terrain (e.g. in TF2 one can place teleporters, or in Minecraft one can place blocks). I don't need the placeable objects to be interactive like in TF2; I just need an engine that won't make me code this from scratch. I have decent knowledge of Python, PHP, HTML, C++ and C# (and a small knowledge of Lua scripting, although I have only been at it for a few months), so I should be able to handle most engines. So far I have looked at UDK and CryEngine, and wasn't thrilled with either.

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris.

    This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

    4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any object or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

    # DATA(i386) __iob 0x3c0
    # DATA(amd64,sparcv9) __iob 0xa00
    # DATA(sparc) __iob 0x140
    iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script is used after both objects have been built to compare the real and stub objects, using data from elfdump, and validate that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

    6916788 ld version 2 mapfile syntax
    PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

    6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

    % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
        -ztext -zdefs -Bdirect ...
    real 0.019708910
    user 0.010101680
    sys 0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory, known as the proto and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases was just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with

    6993877 ld should produce stub objects
    PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with

    7009826 OSnet should use stub objects
    4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions were discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • So what are zones really?

    - by Bertrand Le Roy
    There is a (not so) particular kind of shape in Orchard: zones. Functionally, zones are places where other shapes can render. There are top-level zones, the ones defined on Layout, where widgets typically go, and there are local zones that can be defined anywhere. These local zones are what you target in placement.info. Creating a zone is easy because it really is just an empty shape. Most themes include a helper for it:

    Func<dynamic, dynamic> Zone = x => Display(x);

    With this helper, you can create a zone by simply writing:

    @Zone(Model.Header)

    Let's deconstruct what's happening here with that weird Lambda. In the Layout template where we are working, the Model is the Layout shape itself, so Model.Header is really creating a new Header shape under Layout, or getting a reference to it if it already exists. The Zone function is then called on that object, which is equivalent to calling Display. In other words, you could have just written the following to get the exact same effect:

    @Display(Model.Header)

    The Zone helper function only exists to make the intent very explicit. Now here's something interesting: while this works in the Layout template, you can also make it work from any deeper-nested template and still create top-level zones. The difference is that wherever you are, Model is not the layout anymore so you need to access it in a different way:

    @Display(WorkContext.Layout.Header)

    This is still doing the exact same thing as above. One thing to know is that for top-level zones to be usable from the widget editing UI, you need one more thing, which is to specify them in the theme's manifest:

    Name: Contoso
    Author: The Orchard Team
    Description: A subtle and simple CMS theme
    Version: 1.1
    Tags: business, cms, modern, simple, subtle, product, service
    Website: http://www.orchardproject.net
    Zones: Header, Navigation, HomeFeaturedImage, HomeFeaturedHeadline, Messages, Content, ContentAside, TripelFirst, TripelSecond, TripelThird, Footer

    Local zones are just ordinary shapes like global zones, the only difference being that they are created on a deeper shape than layout.
    For example, in Content.cshtml, you can find our good old code for creating a header zone:

    @Display(Model.Header)

    The difference here is that Model is no longer the Layout shape, so that zone will be local. The name of that local zone is what you specify in placement.info, for example:

    <Place Parts_Common_Metadata_Summary="Header:1"/>

    Now here's the really interesting part: zones do not even know that they are zones, and in fact any shape can be substituted. That means that if you want to add new shapes to the shape that some part has been emitting from its driver for example, you can absolutely do that. And because zones are so barebones as shapes go, they can be created the first time they are accessed. This is what enables us to add shapes into a zone before the code that you would think creates it has even run. For example, in the Layout.cshtml template in TheThemeMachine, the BadgeOfHonor shape is being injected into the Footer zone on line 47, even though that zone will really be "created" on line 168.
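    As an illustration of that last point, the injection itself is just a call on the zone shape from whatever template runs first. A minimal Razor sketch, assuming a shape named BadgeOfHonor as in TheThemeMachine and an optional position argument, looks roughly like this:

    @{
        // Get (or lazily create) the top-level Footer zone on the Layout
        // shape, then add a shape to it at position "5". The shape name
        // and position here are just examples borrowed from TheThemeMachine.
        WorkContext.Layout.Footer.Add(New.BadgeOfHonor(), "5");
    }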

    Read the article

  • Synopsis : Configure WebCenter PS5 with WebCenter Content - Good Example

    - by Vikram Kurma
    In a typical business scenario we often need to display assets like pages and images from WebCenter Content in our portal applications. WebCenter Portal applications provide you with a way to integrate content through JDeveloper, where you can browse and consume the assets from WebCenter Content. In the latest PS5 version, there is a small change needed to enable this feature. If this is not done properly, you would see that the connection is successful but it doesn't allow you to browse through the assets:

    SEVERE: Could not list contents of folder with ID = dCollectionID:-1
    oracle.stellent.ridc.protocol.ServiceException: No service defined for COLLECTION_DISPLAY.

    Don't worry, we are here to help you out on this. Read on for the solution here.

    Read the article

  • How can I merge two SubVersion branches to one working copy without committing?

    - by Eric Belair
    My current Subversion workflow is like so: the trunk is used to make small content changes and bug fixes to the main source code, and branches are used for adding/editing enhancements and projects. So, trunk changes are made, tested, committed and deployed pretty quickly, whereas enhancements and projects need additional user testing and approval. At times, I have two branches that need testing and approval at the same time. I don't want to merge to the trunk and commit until the changes are fully tested and approved. What I need to do is merge both branches into one working copy without any commits. I am using TortoiseSVN, and when I try to merge the second branch, I get an error message: Cannot merge into a working copy that has local modifications. Is there a way that I can do this without committing either merge?

    Read the article

  • Should I be paid for time spent learning a framework?

    - by nate-bit
    To give light to the situation: I am currently one of two programmers working in a small startup software company. Part of my job requires me to learn a Web development framework that I am not currently familiar with. I get paid by the hour. So the question is: Is it wholly ethical to spend multiple hours of the day reading through documentation and tutorials and be paid for this time where I am not actively developing for our product? Or should the bulk of this learning be done at home, or otherwise off hours, to allow for more full-on development of our application during the work day?

    Read the article

  • How to name a subclass that adds a minor, detailed thing?

    - by Louis Rhys
    What is the most concise (yet descriptive) way of naming a subclass that only adds a specific minor thing to the parent? I encounter this case a lot in WPF, where sometimes I have to add a small piece of functionality to an out-of-the-box control for specific cases. Example: TreeView doesn't change the SelectedItem on right-click, but I have to make one that does in my application. Some possible names are:

    TreeViewThatChangesSelectedItemOnRightClick (way too wordy and maybe difficult to read because there are so many words concatenated together)
    TreeView_SelectedItemChangesOnRightClick (slightly more readable, but still too wordy, and the underscore also breaks the normal convention for class names)
    TreeViewThatChangesSIOnRC (non-obvious acronym)
    ExtendedTreeView (more concise, but doesn't describe what it is doing; besides, I already found a class called this in the library, and I don't want to use/modify it in my application)
    LouisTreeView, MyTreeView, etc. (doesn't describe what it is doing)

    It seems that I can't find a name which sounds right. What do you do in situations like this?

    Read the article

  • Understanding the JSF Lifecycle and ADF Optimized Lifecycle

    - by Steven Davelaar
    While coaching ADF development teams over the years, I have noticed that many developers lack a basic understanding of JavaServer Faces, in particular the JSF lifecycle and how ADF optimizes this lifecycle in specific situations. As a result, ADF developers who are tasked to build a seemingly simple ADF page can get extremely frustrated by the (in their eyes) unexpected or illogical behavior of ADF. They start to play with the immediate property and the partialTriggers property in a trial-and-error manner. Often, they play with these properties until their specific issue is solved, unaware of other more severe bugs that might be introduced by the values they choose for these properties.

    So, I decided to submit a presentation for the UKOUG entitled "What you need to know about JSF to be successful with ADF". The abstract was accepted, and I started putting together the presentation and demo application. I built up a demo application step-by-step, trying to cover the JSF-related top issues and challenges I encountered over the years in a simple "Hello World" demo. This turned out to be both a very time-consuming and very interesting journey. I had never thought I would learn so much myself in preparing this presentation. I never thought I would end up with potentially controversial conclusions like "Never set immediate=true on an editable component". I did not realize beforehand the sometimes immense implications of the ADF optimized lifecycle. I never thought that "Hello World" demos could get so complex. But as I went on I was confident this was valuable material, even for experienced ADF developers with a good understanding of JSF.

    When I finished, I realized the original title and abstract were misleading, as was the target audience. Yes, it was covering the JSF lifecycle, but not the other aspects of JSF you need to know for ADF development. Yes, it was covering some JSF basics as mentioned in the abstract, but all in all it had become a pretty advanced presentation. At the same time, the issues discussed are very common; novice ADF developers might easily run into them while building their first pages. I ran out of time, so I decided to just present what I had, apologizing at the beginning for the misleading title and showing a second slide with a better title, "18 invaluable lessons about ADF-JSF interaction". I think the presentation was well received overall, although people who don't like it or don't understand it usually don't come and tell you afterwards...

    I am still struggling with the title; for this blog post I used yet another one. Anyway, you can download the presentation-that-still-lacks-a-good-title here. The finished JDev 11.1.1.6 demo app can be downloaded here. The 18 lessons mentioned in the presentation are summarized here. As mentioned on the last slide, print out the lessons and learn them by heart; I am pretty sure it will save you lots of time and frustration!

    Read the article

  • ASP.NET 3.5 Debugging Using Visual Web Developer Express 2008

    One of the most important features in Visual Web Developer Express 2008 for developing ASP.NET 3.5 websites is the debugging feature. Having a debugger is important in troubleshooting source code and application-related problems. It will save you a lot of time if you encounter and fix problems during the design and testing stage. This article is all about basic debugging in ASP.NET using Visual Web Developer Express; its information will provide you with an important tool for designing and creating ASP.NET websites.

    Read the article

  • SQL SERVER – Video – Performance Improvement in Columnstore Index

    - by pinaldave
    I earlier wrote an article about SQL SERVER – Fundamentals of Columnstore Index and it was very well received by the community. However, one of the suggestions I keep receiving for that article is that many readers wanted to see the columnstore index in action but were not able to do so. Some of the readers did not install SQL Server 2012, or did not have a good enough machine to recreate the big table involved in the demo. For that reason, I have created a small video. I have written two more articles on the columnstore index. Please read them as a follow-up to the video:

    SQL SERVER – How to Ignore Columnstore Index Usage in Query
    SQL SERVER – Updating Data in A Columnstore Index

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • How to go automatically from Suspend into Hibernate?

    - by Sergey Stadnik
    Is it possible to make Ubuntu go into Hibernate from Suspend, aka "Suspend Sedation"? For example, my laptop is set up to go into Suspend once I close the lid. If I then don't use it for an entire day, the battery goes flat, because even in suspend mode the hardware still consumes a small amount of power, and the battery eventually discharges. What I want is to be able to tell Ubuntu that even if it is suspended, it still needs to go into Hibernate after some hours of inactivity. Windows can do that. Ubuntu can be programmed to go into Standby or Hibernate on a timer, but not both. Update: I guess I need to be more specific. What I am looking for is this: when I close the lid, the laptop is put into Suspend. Then, after a pre-determined time (even if the battery is going strong), if I still don't use it, it should put itself into Hibernate to save battery power.

    Read the article

  • BASH Scripting: Check if running with sudo/superuser; if not, don't run, return error

    - by EvilPhoenix
    This is something I've been curious about. I make a lot of small bash scripts (.sh files) to do tasks that I routinely do. Some of those tasks require everything to be run as superuser. I've been curious: is it possible, within the bash script and prior to everything being run, to check whether the script is being run as superuser, and if not, print a message saying "You must be superuser to use this script" and then terminate the script? The other side of that is I'd like to have the script run without generating the error when the user is superuser. Any ideas on coding (if statements, etc.) for how to do this?

    Read the article
