Search Results

Search found 15591 results on 624 pages for 'problems'.


  • Programming and Ubiquitous Language (DDD) in a non-English domain

    - by Sandor Drieënhuizen
    I know there are some questions already here that are closely related to this subject, but none of them take Ubiquitous Language as the starting point, so I think that justifies this question. For those who don't know: Ubiquitous Language is the concept of defining a (both spoken and written) language that is equally used across developers and domain experts to avoid inconsistencies and miscommunication due to translation problems and misunderstanding. You will see the same terminology show up in code, conversations between any team member, functional specs and whatnot.

    So, what I was wondering about is how to deal with Ubiquitous Language in non-English domains. Personally, I strongly favor writing programming code entirely in English, including comments but of course excluding constants and resources. However, in a non-English domain, I'm forced to make a decision either to:

    1. Write code reflecting the Ubiquitous Language in the natural language of the domain.
    2. Translate the Ubiquitous Language to English and stop communicating in the natural language of the domain.
    3. Define a table that defines how the Ubiquitous Language translates to English.

    Here are some of my thoughts based on these options:

    1) I have a strong aversion to mixed-language code, that is, coding using type/member/variable names etc. that are non-English. Most programming languages 'breathe' English to a large extent and most of the technical literature, design pattern names etc. are in English as well. Therefore, in most cases there's just no way of writing code entirely in a non-English language, so you end up with mixed languages anyway.

    2) This will force the domain experts to start thinking and talking in the English equivalent of the UL, something that will probably not come naturally to them and therefore hinders communication significantly.

    3) In this case, the developers communicate with the domain experts in their native language while the developers communicate with each other in English and, most importantly, they write code using the English translation of the UL.

    I'm sure I don't want to go for the first option and I think option 3 is much better than option 2. What do you think? Am I missing other options?

    UPDATE: Today, about a year later, having dealt with this issue on a daily basis, I have to say that option 3 has worked out pretty well for me. It wasn't as tedious as I initially feared, and translating in real time while talking to the client wasn't a problem either. I also found the following advantages to be true, based on my experience. Translating the UL makes you pay more attention to defining the UL and even the domain itself, especially when you don't know how to translate a term and you have to start looking through dictionaries etc. This has even caused me to reconsider domain modeling decisions a few times. It helps you make your knowledge of the English language more profound. Obviously, your code is much more pleasant to look at instead of being a mind-boggling obscenity.
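
    As an illustration of option 3, a minimal sketch of what such a translation table might look like when kept next to the code; the class and the Dutch terms below are hypothetical examples invented for illustration, not taken from the original project:

        // Hypothetical example: a glossary recording how domain terms (here Dutch)
        // map to the English identifiers used in code. Terms are invented for illustration.
        using System.Collections.Generic;

        public static class UbiquitousLanguageGlossary
        {
            // Domain term (as spoken with the experts) -> identifier used in code
            public static readonly IReadOnlyDictionary<string, string> DutchToEnglish =
                new Dictionary<string, string>
                {
                    { "polis", "Policy" },
                    { "verzekeringnemer", "PolicyHolder" },
                    { "dekking", "Coverage" }
                };
        }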

    Read the article

  • Ubuntu boots to terminal on start up

    - by Jules
    For a long time I've been unable to get updates due to a "repositories not found" error. Yesterday someone fixed this for me, but after installing 94 days' worth of updates my system wanted to restart. It looks like it is booting normally, but then it opens a terminal and asks for my login and password. I have tried Ctrl+Alt+F7 and startx to no avail. Here is everything that appears on screen when I turn the computer on:

        Ubuntu 10.04.4 LTS box-o-doom tty1
        box-o-doom login: julian
        password:
        last login: Sun Jul 8 10:28:02 BST tty1
        Linux box-o-doom 2.6.32-41-generic-pae #91-Ubuntu SMP Wed Jun 13 12:00:09 UTC 2012 i686 GNU/Linux
        Ubuntu 10.04.4 LTS
        Welcome to Ubuntu!
        *Documentation: http://help.ubuntu.com
        julian@box-o-doom:~$ _

    I then tried dmesg, which produced hundreds of lines, all very similar to the first line reproduced here:

        [ 9.453119] type=1505 audit(1341742405.022:10): operation="profile_replace" pid=743 name="/usr/lib/connman/scripts/dhclient-script"

    followed by this at the end:

        [ 9.475880] alloc irq_desc for 27 on node-1
        [ 9.475883] alloc kstat_irqs on node-1
        [ 9.475890] forcedeth 0000:00:07.0: irq27 for MSI/MSI-X
        [ 9.760031] hda_codec: ALC662 rev1: BIOS auto-probing.
        [ 10.048095] input: HDA Digital PCBeep as /devices/pci0000:00:05.0/input/input6
        [ 10.862278] ppdev: user-space parallel port driver
        [ 20.268018] eth0: no IPv6 routers present
        julian@box-o-doom:~$ _

    Results of startx: lots of text scrolls off the screen and I have no way of reading it, but everything I can see is reproduced below:

        current version of pixman: 0.16.4
        Before reporting problems, check http://wiki.x.org to make sure that you have the latest version
        Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice,
        (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.0.log", Time: Sun Jul 8 12:02:23 2012
        (==) Using config file: "/etc/X11/xorg.conf"
        (==) Using config directory: "/usr/lib/X11/xorg.conf.d"
        FATAL: Module nvidia not found.
        (EE) NVIDIA: Failed to load the NVIDIA kernel module. Please check your
        (EE) NVIDIA: system's kernel log for additional error messages.
        (EE) Failed to load module "nvidia" (module-specific error, 0)
        (EE) No drivers available.
        Fatal server error: no screens found
        Please consult the X.Org Foundation support at http://wiki.x.org for help.
        Please also check the log files at "/var/log/Xorg.0.log" for additional information.
        ddxSigGiveUp: Closing log
        giving up
        xinit: No such file or directory (errno 2): unable to connect to X server
        xinit: No such process (errno 3): server error
        julian@box-o-doom:~$ _

    Read the article

  • Kauffman Foundation Selects Stackify to Present at Startup@Kauffman Demo Day

    - by Matt Watson
    Stackify will join fellow Kansas City startups to kick off Global Entrepreneurship Week.

    On Monday, November 12, Stackify, a provider of tools that improve developers’ ability to support, manage and monitor their enterprise applications, will pitch its technology at the Startup@Kauffman Demo Day in Kansas City, Mo. Hosted by the Ewing Marion Kauffman Foundation, the event will mark the start of Global Entrepreneurship Week, the world’s largest celebration of innovators and job creators who launch startups.

    Stackify was selected through a competitive process for a six-minute opportunity to pitch its new technology to investors at Demo Day. In his pitch, Stackify’s founder, Matt Watson, will discuss the current challenges DevOps teams face and reveal how Stackify is reinventing the way software developers provide application support.

    In October, Stackify had successful appearances at two similar startup events. At Tech Cocktail’s Kansas City Mixer, the company was named “Hottest Kansas City Startup,” and it won free hosting service after pitching its solution at St. Louis, Mo.’s Startup Connection.

    “With less than a month until our public launch, events like Demo Day are giving Stackify the support and positioning we need to change the development community,” said Watson. “As a serial technology entrepreneur, I appreciate the Kauffman Foundation’s support of startup companies like Stackify. We’re thrilled to participate in Demo Day and Global Entrepreneurship Week activities.”

    Scheduled to publicly launch in early December 2012, Stackify’s platform gives developers insights into their production applications, servers and databases. Stackify finally provides agile developers safe and secure remote access to look at log files, config files, server health and databases. This solution removes the bottleneck from managers and system administrators who, until now, are the only team members with access. Essentially, Stackify enables development teams to spend less time fixing bugs and more time creating products.

    Currently in beta, Stackify has already been named a “Company to Watch” by Software Development Times, which called the startup “the next big thing.” Developers can register for a free Stackify account on Stackify.com.

    About Stackify: Founded in 2012, Stackify is a Kansas City-based software service provider that helps development teams troubleshoot application problems. Currently in beta, Stackify will be publicly available in December 2012, when agile developers will finally be able to provide agile support. The startup has already been recognized by Tech Cocktail as “Hottest Kansas City Startup” and was named a “Company to Watch” by Software Development Times. To learn more, visit http://www.stackify.com and follow @stackify on Twitter.

    Read the article

  • Hey You! Stop Using the Apply Button and Just Click OK! [Geek Rants]

    - by The Geek
    As a computer geek, I often find myself helping people, and watching them change settings on their PC… and they almost always click the Apply button, and then the OK button. Why? Whenever you encounter a dialog box in Windows, there are the standard OK, Cancel, Apply buttons—but you don’t actually have to click the Apply button first. The OK button does the same thing, saves the settings, and then closes the dialog box… saving you an extra click. Don’t believe me? Try it out for yourself. Only the worst possible application won’t behave that way, and you probably don’t want to use that type of application to begin with. The only exception to this rule is a multiple tab dialog box, on a badly written application. Sometimes… your settings on one tab won’t stick unless you click Apply. Note that in this particular case, you can make changes in any one of the tabs, and they will carry through without having to click Apply, because this dialog window is well written. We’re just using the screenshot as an example of a multiple tab setting interface. So now that you know better, you can tell us… do you always use the Apply button first? Have you ever found an instance where it behaves differently?

    Read the article

  • ACORD LOMA 2010: Building Insurance Companies in the Clouds

    - by [email protected]
    Chuck Johnston, vice president of global strategy and alliances for Oracle Insurance, participated in a featured speaking session at ACORD LOMA 2010. He provides an update on his discussions with insurers at the show and after his presentation. Every year I always make a point of walking the show floor at the ACORD LOMA technology conference to visit with colleagues and competitors, and try to get a feel for which way the industry will move over the next 12 months. Insurers are looking for substance in cloud (computing), trying to mix business with pleasure (monetizing social networks), and expect differentiation through commodity (Software as a Service). The disconnect at this show is that most vendors are still struggling with creating a clear path from Facebook to customer intimacy, SaaS to core cost savings and clouds to ubiquitous presence. Vendors need to find new ways to help insurers find the real value in these potentially disruptive technologies by understanding the changes coming to the insurance business and how these new technologies impact the new insurance business. Oracle's approach to understanding the evolving insurance industry comes from a discussion with our customers in our Insurance CIO Council, where one of our customers suggested we buy an insurance company to really understand our customers. We have decided to do the next best thing and build our own model of an insurance company, Alamere Insurance, that uses the latest technologies to transform its own business. Alamere will never issue an actual policy, but it does give us a framework to consider the impacts of changes in the insurance landscape and how Oracle technology meets the challenge or needs to evolve to help our customers be successful. In preparing for my talk at the conference using Alamere as my organizing theme, I found myself reading actuarial memoranda on CSO table changes and articles on underwriting theory that really made me think about my customer's problems first and foremost, and then how Oracle technology can provide answers. As much as I prefer techno-thrillers and sci-fi novels to actuarial papers for plane reading, I got very excited about the idea of putting myself back in the customer shoes I haven't worn in a decade, and really looking at how Oracle can power the Adaptive Insurance Enterprise. Talking to customers and industry people after the session, the idea of Alamere seemed to excite people and I got a lot of suggestions as to what lines of business we should model and where we should focus first on technology uptake. One customer said to a colleague that Oracle's attempt to "share their pain" was unique among vendors. More about Alamere, and the Adaptive Insurance Enterprise next time. Chuck Johnston is vice president of global strategy and alliances for Oracle Insurance.

    Read the article

  • Advice on designing a robust program to handle a large library of meta-information & programs

    - by Sam Bryant
    So this might be overly vague, but here it is anyway. I'm not really looking for a specific answer, but rather general design principles or direction towards resources that deal with problems like this. It's one of my first large-scale applications, and I would like to do it right.

    Brief Explanation
    My basic problem is that I have to write an application that handles a large library of meta-data, can easily modify the meta-data on-the-fly, is robust with respect to crashing, and is very efficient. (Sorta like the design parameters of iTunes, although sometimes iTunes performs more poorly than I would like.) If you don't want to read the details, you can skip the rest.

    Long Explanation
    Specifically, I am writing a program that creates a library of image files and meta-data about these files. There is a list of tags that may or may not apply to each image. The program needs to be able to add new images, new tags, assign tags to images, and detect duplicate images, all while operating. The program contains an image Viewer which has tagging operations. The idea is that if a given image A is viewed while the library has tags T1, T2, and T3, then that image will have boolean flags for each of those tags (depending on whether the user tagged that image while it was open in the Viewer). However, prior to being viewed in the Viewer, image A would have no value for tags T1, T2, and T3. Instead it would have a "dirty" flag indicating that it is unknown whether or not A has these tags. The program can introduce new tags at any time (which would automatically set all images to "dirty" with respect to this new tag).

    This program must be fast. It must easily be able to pull up a list of images with or without a certain tag, as well as images which are "dirty" with respect to a tag. It has to be crash-safe, in that if it suddenly crashes, all of the tagging information done in that session is not lost (though perhaps it's okay to lose some of it). Finally, it has to work with a lot of images (10,000).

    I am a fairly experienced programmer, but I have never tried to write a program with such demanding needs and I have never worked with databases. With respect to the meta-data storage, there seem to be a few design choices:

    Choice 1: Individual meta-data vs. centralized meta-data
    Individual meta-data: have a separate meta-data file for each image. This way, as soon as you change the meta-data for an image, it can be written to the hard disk, without having to rewrite the information for all of the other images.
    Centralized meta-data: have a single file to hold the meta-data for every file. This would probably require meta-data writes in intervals as opposed to after every change. The benefit here is that you could keep a centralized list of all images with a given tag, etc., making the task of pulling up all images with a given tag very efficient.
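
    A minimal sketch of the per-image tag state the question describes; the class and enum names are hypothetical, chosen only to illustrate the "dirty" flag idea:

        using System.Collections.Generic;

        // Hypothetical sketch: each image tracks, per tag, whether the tag applies,
        // does not apply, or has not been decided yet ("dirty").
        public enum TagState { Dirty, Applied, NotApplied }

        public class ImageRecord
        {
            public string FilePath { get; set; }

            // Tag name -> current state for this image.
            private readonly Dictionary<string, TagState> tagStates = new Dictionary<string, TagState>();

            // A newly introduced tag starts out "dirty" for every image.
            public void IntroduceTag(string tag)
            {
                if (!tagStates.ContainsKey(tag))
                    tagStates[tag] = TagState.Dirty;
            }

            public void SetTag(string tag, bool applied)
            {
                tagStates[tag] = applied ? TagState.Applied : TagState.NotApplied;
            }

            public bool IsDirty(string tag)
            {
                TagState state;
                return !tagStates.TryGetValue(tag, out state) || state == TagState.Dirty;
            }
        }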

    Read the article

  • Visual Studio 2010 plus Help Index: have your cake and eat it too

    - by Adrian Hara
    Although the team's intentions might have been good, the new help system in Visual Studio 2010 is a huge step backwards (more like a cannonball-shot-kind-of-leap really) from the one we all know (and love?) in Visual Studio 2008 and 2005 (and heck, even VS6). Its biggest problem, from my point of view, is the total and complete lack of the Help Index feature: you know... the thing where you just go and type in what you're looking for and it filters down the list of results automatically. For me this was the number one productivity feature in the "old" help system, allowing me to find stuff very quickly. Number two is that it's entirely web based and runs, by default, in the browser. So imagine, when you press F1, a new tab opens in your default browser pointing to the help entry. While this is wrong in many ways, it's also extremely annoying; cleaning up tabs in the browser becomes a chore which represents a serious productivity hit.

    These and many other problems were discussed extensively (and rather vocally) on Connect, but it seems MS chose to ignore them and opted to release the new help system anyway, with the promise that more features will be added in a later release. Again, it kind of amazes me that they chose to ship a product with FEWER features than the previous one and, what's worse, missing KEY features, just so it's "standards based" and "extensible". To be honest, I couldn't care less about the help system's implementation, I just want it to be usable, and I would've thought that by now the software community and especially MS would've learned this lesson.

    In the end, what kind of saddens me is that MS regards these basic features as ones for the "power help user". I mean, come on! a) it's not like my aunt's using Visual Studio 2010 and she represents the regular user, b) all software developers are, by definition, power users and c) it's a freakin' help, not rocket science! As you can tell, I'm pretty pissed. Even more so because I really feel that the VS2010 & co. release really is a great one, with a lot of effort going into the various platforms and frameworks, most (if not all) of them being really REALLY good products. And then they go and screw up the help! How lame is that?!

    Anyway, it's not all gloom-and-doom. Luckily there is a desktop app which presents a UI over the new help system that's very close to what was there in VS2008, by Robert Chandler (to whom I hereby declare eternal gratitude). It still has some minor issues but I'll take it over the browser version of the help any day. It's free, pretty quick (on my machine ;)) and nicely usable. So, if you hate the new help system (passionately) like I do, download H3Viewer now.

    Read the article

  • Softpedia published some of my open source projects — how to react?

    - by polarblau
    (FYI: I've just moved this question over from Stack Overflow on recommendation.) I just received a few emails informing me that softpedia.com has added some of my "products" to their "database of scripts, code snippets and web applications". My products are in this case some smaller open source projects, which I have hosted and published on GitHub. Now I'm wondering how to react to this.

    This site is indirectly making money off my free work through ads on three pages before the actual download. They also seem to "invent" version numbers and I can't find out if they're hosting the latest or all versions of my projects. — I can see how this could lead to problems in the future, since I don't control what's "the latest" everywhere. On the other hand I don't mind some extra publicity. I want as many people as possible to know about the projects, use them, fork them and hopefully improve them. The projects in question are really fairly small, but this might not be the case in the future for me and/or other people reading this question. I'm sure that this must have happened to others around here. What's your opinion? Should I try to get the downloads removed?

    Update 1: I've requested the removal and mentioned that I don't feel that Softpedia can provide the right environment for this kind of project. Their team got back to me instantly with a friendly email saying that they'll remove the links for now: "If you are worried that your projects won't be updated, then I must tell you that I have them bookmarked in my RSS reader, so any version changes will be forwarded to me when needed. So I promise I'll keep your script up to date as soon as I see an update in the repository." I have to say that I appreciate this kind of reaction quite a lot, and so I sent them another email, describing in more detail what I'm worried about and what bothers me. I also stated that I'm aware that my license clearly permits them to host the projects in any case, but that I'd even be happy if they would host the projects as long as they could convince me of a few details and maybe make some small changes to the way the projects are represented. — Let's see where this goes.

    Update 2: After discussing with their contact and requesting some changes regarding display of version (they had given the possibility to do so) and authorship, they put the projects back up on their site. All in all a positive and definitely interesting experience.

    Read the article

  • How should I pitch moving to an agile/iterative development cycle with mandated 3-week deployments?

    - by Wayne M
    I'm part of a small team of four, and I'm the unofficial team lead (I'm lead in all but title, basically). We've largely been a "cowboy" environment, with no architecture or structure and everyone doing their own thing. Previously, our production deployments would be every few months without being on a set schedule, as things were added/removed to the task list of each developer. Recently, our CIO (semi-technical but not really a programmer) decided we will do deployments every three weeks; because of this I instantly thought that adopting an iterative development process (not necessarily full-blown Agile/XP, which would be a huge thing to convince everyone else to do) would go a long way towards helping manage expectations properly so there isn't this far-fetched idea that any new feature will be done in three weeks. IMO the biggest hurdle is that we don't have ANY kind of development approach in place right now (among other things like no CI or automated tests whatsoever). We don't even use Waterfall, we use "Tell Developer X to do a task, expect him to do everything and get it done". Are there any pointers that would help me start to ease us towards an iterative approach and A) Get the other developers on board with it and B) Get management to understand how iterative works? So far my idea involves trying to set up a CI server and get our build process automated (it takes about 10-20 minutes right now to simply build the application to put it on our development server), since pushing tests and/or TDD will be met with a LOT of resistance at this point, and constantly force us to break larger projects into smaller chunks that could be done iteratively in a three-week cycle; my only concern is that, unless I'm misunderstanding, an agile/iterative process may or may not release the software (depending on the project scope you might have "working" software after three weeks, but there isn't enough of it that works to let users make use of it), while I think the expectation here from management is that there will always be something "ready to go" in three weeks, and that disconnect could cause problems. On that note, is there any literature or references that explains the agile/iterative approach from a business standpoint? Everything I've seen only focuses on the developers, how to do it, but nothing seems to describe it from the perspective of actually getting the buy-in from the businesspeople.

    Read the article

  • Boot Problem in Asus EEE PC 1015CX

    - by Sâmrat VikrãmAdityá
    I am a newbie to the Linux world, although I have previously worked on Ubuntu 11.04 for daily use (net access and simple recordings using Audacity). I am not sure at what level I stand as a newbie. I bought this Asus Eee PC two days back. The model is Asus 1015CX. See the specs here: http://www.flipkart.com/asus-1015cx-blk011w-laptop-2nd-gen-atom-dual-core-1gb-320gb-linux/p/itmd8qu4quzu8srr

    I created a live USB to install 12.10. The USB booted fine. When I clicked the "Try Ubuntu" option, it showed me a black screen with a cursor blinking. I waited for 15 minutes and had to restart using the power button. On clicking the "Install Ubuntu" button, the install process went seamlessly. (I have Windows 7 installed on one of the partitions.) I installed 12.10 alongside the previous Windows installation.

    The system was then rebooted for the first time. It showed the GRUB menu and I selected the first option, Ubuntu. After showing the splash screen for a second, it began showing various messages on a black screen and then it got stuck on a "Stopping Save kernel state" message. I had to force shut the system using the power button. Sometimes it just gives a blank screen with a cursor blinking, and on pressing the power button some messages pop up stating that acpid is doing something and stopping services, and then the system shuts down.

    I tried booting with "nomodeset" and other parameters as directed in solutions to previous such problems on the forums. Also, Ctrl+Alt+F1 through F12 is not doing anything for me anywhere. At installation, I checked the "Log in automatically" option. On booting into recovery, several options come up. Clicking "resume" just gives me a blank screen with a cursor blinking. On dropping to a root shell and remounting the filesystem as RW, I am able to supply some commands that worked for others:

    startx -- several messages come up, with the last one stating: Fatal error: No screen found
    sudo service lightdm start -- gives a blank screen with a cursor blinking
    lspci | grep VGA -- shows some Intel integrated graphics... something I don't remember

    I have reconfigured xserver-xorg and lightdm, and reinstalled ubuntu-desktop and unity. What should I do? Will going back to 11.04 work? Or should I give up all hope of running Ubuntu on my netbook? Please help.

    Read the article

  • Heightmap in Shader not working

    - by CSharkVisibleBasix
    I'm trying to implement GPU-based geometry clipmapping and am having problems applying a simple heightmap to my terrain. For the heightmap I use a simple texture with the surface format "single". I've taken the texture from here. To apply it to my terrain, I use the following shader code:

        texture Heightmap;
        sampler2D HeightmapSampler = sampler_state
        {
            Texture = <Heightmap>;
            MinFilter = Point;
            MagFilter = Point;
            MipFilter = Point;
            AddressU = Mirror;
            AddressV = Mirror;
        };

    Vertex shader:

        float4 worldPos = mul(float4(pos.x, 0.0f, pos.y, 1.0f), worldMatrix);
        float elevation = tex2Dlod(HeightmapSampler, float4(worldPos.x, worldPos.z, 0, 0));
        worldPos.y = elevation * 128;

    The complete vertex shader (also containing the clipmapping transforms) looks like this:

        float4x4 View : View;
        float4x4 Projection : Projection;
        float3 CameraPos : CameraPosition;
        float LODscale;   // The LOD ring index, 0: highest, x: lowest
        float2 Position;  // The position of the part in the world

        texture Heightmap;
        sampler2D HeightmapSampler = sampler_state
        {
            Texture = <Heightmap>;
            MinFilter = Point;
            MagFilter = Point;
            MipFilter = Point;
            AddressU = Mirror;
            AddressV = Mirror;
        };

        // Accept only float2 because we extract the 3rd dimension out of the heightmap
        float4 wireframe_VS(float2 pos : POSITION) : POSITION
        {
            float4x4 worldMatrix = float4x4(
                LODscale, 0, 0, 0,
                0, LODscale, 0, 0,
                0, 0, LODscale, 0,
                -Position.x * 64 * LODscale + CameraPos.x, 0, Position.y * 64 * LODscale + CameraPos.z, 1);

            float4 worldPos = mul(float4(pos.x, 0.0f, pos.y, 1.0f), worldMatrix);

            float elevation = tex2Dlod(HeightmapSampler, float4(worldPos.x, worldPos.z, 0, 0));
            worldPos.y = elevation * 128;

            float4 viewPos = mul(worldPos, View);
            float4 projPos = mul(viewPos, Projection);
            return projPos;
        }
    Read the article

  • Breakfast Keynote, More at Gartner IAM Summit This Week

    - by Tanu Sood
    We look forward to seeing you at the Gartner Identity and Access Management Conference. Oracle is proud to be a Silver Sponsor of the Gartner Identity and Access Management Summit happening December 3 - 5 in Las Vegas, NV. Don’t miss the opportunity to hear Oracle Senior VP of Identity Management, Amit Jasuja, present Trends in Identity Management at our keynote presentation and breakfast on Tuesday, December 4th at 7:30 a.m. Everyone that attends is entered into a raffle to win a free JAWBONE JAMBOX wireless speaker system. Also, don’t forget to visit the Oracle Booth to mingle with your peers and speak to Oracle experts. Learn how Oracle Identity Management solutions are enabling the Social, Mobile, and Cloud (SoMoClo) environments.

    Visit Oracle Booth #S15 to:
    View a demonstration of our latest release - Oracle Identity Management 11g R2
    Visit our virtual collateral rack and download useful resources
    Enter to win a JAWBONE JAMBOX Wireless Speaker System

    Exhibit Hall Hours:
    Monday, December 3 — 11:45 a.m. – 1:45 p.m. and 6:15 p.m. – 8:15 p.m.
    Tuesday, December 4 — 11:45 a.m. – 2:45 p.m.

    To schedule a meeting with Oracle Identity Management executives and experts at Gartner IAM, please email us or speak to your account representative. We look forward to seeing you at the Gartner Identity and Access Management Summit!

    Gartner IAM Summit, December 3 - 5, 2012, Caesars Palace. Visit Oracle at Booth #S15.

    Keynote Breakfast: Trends in Identity Management
    Tuesday, December 4, 2012, 7:15 a.m. - 8:00 a.m., Octavius 16
    Speakers: Amit Jasuja, Senior Vice President, Identity Management, Oracle; Ranjan Jain, Enterprise Architect, Cisco

    As enterprises embrace mobile and social applications, security and audit have moved into the foreground. The way we work and connect with our customers is changing dramatically and this means re-thinking how we secure the interaction and enable the experience. Work is an activity not a place - mobile access enables employees to work from any device anywhere and anytime. Organizations are utilizing "flash teams" - instead of a dedicated group to solve problems, organizations utilize more cross-functional teams. Work is now social - email collaboration will be replaced by dynamic social media style interaction. In this session, we will examine these three secular trends and discuss how organizations can secure the work experience and adapt audit controls to address the "new work order".

    For more information, please visit www.oracle.com/identity.

    Read the article

  • Teacher demands excessive/unjustified use of Design Patterns

    - by SoboLAN
    I study computer science and I have a class called "Programming Techniques". Its purpose is to teach (us) good object-oriented design principles. During the semester we have homeworks, programs that we must write to demonstrate what we've learned. The lab assistant demands for each of these homeworks that specific design patterns should be used. For example, the current homework is an application used for processing customer orders. We are demanded to use either the "Factory Method" or "Abstract Factory" design pattern for this.

    It gets even worse: at the end of the semester we must write a program (something more complex) that must use at least one creational pattern, at least one structural pattern and at least one behavioural pattern.

    Is it normal to demand this? I mean, forcing us to design our programs in such a way that a specific design pattern makes sense is just beyond what I consider OK. If I'm a car mechanic and have a huge tool box, then I will use a certain tool from that box if and when the situation demands it. Not more, not less. If my design of the application doesn't demand at all the use of "Abstract Factory" (for example), then why should I implement it?

    I'm not sure yet if the senior lecturer agrees with what the lab assistant is demanding, but I want to talk to him about it and I need solid arguments to do so. How should I approach this problem with him?

    PS: I'm sure there must be a better way to teach us these things. Maybe making us read about 3 design patterns each week and the next week giving us a test with small but specific programming or architectural situations/problems. The goal in that test would be to identify what design patterns would make sense and how they could be implemented. This way, he can see if we understand them.

    EDIT: These homeworks are not just 100-line programs; they have quite a lot of requirements and are fairly complicated. This is the reason we have about 2 - 3 weeks of deadline for each of them. I agree that practicing this is the best way to learn. But shouldn't smaller programs/applications be used for this? Something just for demonstration purposes. Not big programs with lots of requirements/classes/etc.
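
    For readers unfamiliar with the pattern being demanded, here is a minimal, hypothetical Factory Method sketch in the spirit of the order-processing homework; the class names are invented for illustration and are not from the actual assignment:

        // Minimal Factory Method sketch (hypothetical names).
        public abstract class Order
        {
            public abstract decimal Total();
        }

        public class OnlineOrder : Order
        {
            public override decimal Total() { return 100m; }
        }

        public class InStoreOrder : Order
        {
            public override decimal Total() { return 90m; }
        }

        // The "factory method" lives in the creator; subclasses decide which Order to build.
        public abstract class OrderProcessor
        {
            protected abstract Order CreateOrder();   // factory method

            public decimal Process()
            {
                Order order = CreateOrder();
                return order.Total();
            }
        }

        public class OnlineOrderProcessor : OrderProcessor
        {
            protected override Order CreateOrder() { return new OnlineOrder(); }
        }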

    Read the article

  • Camera not staying behind model while moving in circle

    - by ChocoMan
    I have a camera behind a model (3rd person) and I'm having problems KEEPING it behind the model. When I first start my game, you see the back of the model. If the model moves forward, backward or strafes left or right, the camera moves along accordingly. When the model rotates (stationary), the camera rotates accordingly with the model, still pointing at the model's back. So far, so good.

    The problem comes when the player is BOTH moving and rotating at the same time. Take for example a model moving in a circular pattern like running around a track. As the model moves in this motion, the model rotates slightly more with each complete rotation. Eventually, instead of looking at the model's back, eventually you will see the model in a profile view, and before you know it, the model's front is facing the camera. And when you stop moving the model, the model stays in that position.

    So, as long as my model is stationary and rotating in one place, the camera rotates correctly. But as soon as there is any sort of movement while rotating, the model is offset by a mysterious increasing amount. How can I keep the camera maintaining the same view no matter how I move AND rotate at the same time?

        // Rotates model and pitches camera on its own axis
        public void modelRotMovement(GamePadState pController)
        {
            /* For rotating the model left or right.
             * Camera maintains distance from model
             * throughout rotation and if model moves
             * to a new position.
             */
            Yaw = pController.ThumbSticks.Right.X * MathHelper.ToRadians(speedAngleMAX);
            AddRotation = Quaternion.CreateFromAxisAngle(Vector3.Up, yaw);
            //AddRotation = Quaternion.CreateFromYawPitchRoll(Yaw, 0, 0);
            ModelLoad.MRotation *= AddRotation;
            MOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation);

            Pitch = pController.ThumbSticks.Right.Y * MathHelper.ToRadians(speedAngleMAX);
            AddPitch = Quaternion.CreateFromAxisAngle(Vector3.Up, pitch);
            ModelLoad.CRotation *= AddPitch;
            COrientation = Matrix.CreateFromQuaternion(ModelLoad.CRotation);
        }

        // Orbit (yaw) camera around model
        public void cameraYaw(float yaw)
        {
            Vector3 yawAngle = ModelLoad.CameraPos - ModelLoad.camTarget;
            Vector3 axisYaw = Vector3.Up;
            ModelLoad.CameraPos = Vector3.Transform(yawAngle,
                Matrix.CreateFromAxisAngle(axisYaw, yaw)) + ModelLoad.camTarget;
        }
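
    One common approach to this kind of drift is not to rotate the camera incrementally at all, but to recompute its position every frame from a fixed local offset rotated by the model's current orientation, so no error can accumulate. A minimal XNA-style sketch; the offset values and the ModelLoad.ModelPos field are assumptions for illustration, not the original code:

        // Hypothetical sketch: derive the camera from the model's orientation each frame,
        // so the camera can never accumulate its own rotation error.
        Vector3 localOffset = new Vector3(0f, 2f, -6f);   // assumed "behind and above" offset

        Matrix modelOrientation = Matrix.CreateFromQuaternion(ModelLoad.MRotation);
        Vector3 cameraPos = ModelLoad.ModelPos + Vector3.Transform(localOffset, modelOrientation);

        Matrix view = Matrix.CreateLookAt(cameraPos, ModelLoad.ModelPos, Vector3.Up);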

    Read the article

  • Silverlight Cream for March 07, 2011 -- #1055

    - by Dave Campbell
    In this Issue: Max Paulousky, Chris Rouw, David Anson, Jesse Liberty, Shawn Wildermuth, Simon Guindon, and Dhananjay Kumar.

    Above the Fold:
    Silverlight: "Faster Databinding in WPF and Silverlight using OptimizedObservableCollection" - Simon Guindon
    WP7: "Phoney Tools Updated (WP7 Open Source Library)" - Shawn Wildermuth

    From SilverlightCream.com:

    Problems With Sharing Windows Phone 7 Applications Within A Large Group Of Beta Testers - Max Paulousky has a post up discussing the issues surrounding beta testing a WP7 app with a large group of testers... and how to pull it all off.

    WP7 Insights #1: Consuming REST APIs within a WP7 app - Chris Rouw is beginning a WP7 series based on his recent experience of getting a client's app into the marketplace. This first in his series is on consuming REST APIs... lots of good code and explanations.

    Improving Windows Phone 7 application performance is even easier with these LowProfileImageLoader and DeferredLoadListBox updates - David Anson has an update to his LowProfileImageLoader and DeferredLoadListBox after issues brought up by readers... so we all win with the great feedback from alert devs.

    When Isolated Storage Isn’t Enough - Jesse Liberty started looking at Jeremy Likness' Sterling with this post in the WP7 From Scratch series. He starts with downloading it from CodePlex... great way to get into Sterling if you haven't already.

    Phoney Tools Updated (WP7 Open Source Library) - Shawn Wildermuth has the latest drop of his Phoney Tools up... this is the last Alpha. I've added a tag for it as well. He's fixed some things, added others... check out the post and go grab the code.

    Faster Databinding in WPF and Silverlight using OptimizedObservableCollection - Simon Guindon is a blogger I've not been following, but this post on an OptimizedObservableCollection caught my eye. He added an AddRange() to the ObservableCollection to get a speed enhancement when adding items... and a pretty good speed enhancement it is (see the generic AddRange sketch below).

    Reading files asynchronously using WebClient class in Silverlight - Dhananjay Kumar is another prolific blogger that I've not been following, so we'll start with his latest... a step-by-step guide to reading an XML file asynchronously.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream | Join me @ SilverlightCream | Phoenix Silverlight User Group
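
    As context for the OptimizedObservableCollection entry, a generic sketch of the usual AddRange technique it describes: suppress per-item change notifications while adding, then raise a single Reset. This is an illustration of the general idea, not Simon Guindon's actual implementation:

        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Collections.Specialized;

        // Generic illustration of an ObservableCollection with a bulk AddRange.
        public class OptimizedObservableCollection<T> : ObservableCollection<T>
        {
            private bool suppressNotification;

            protected override void OnCollectionChanged(NotifyCollectionChangedEventArgs e)
            {
                if (!suppressNotification)
                    base.OnCollectionChanged(e);
            }

            public void AddRange(IEnumerable<T> items)
            {
                suppressNotification = true;
                foreach (T item in items)
                    Add(item);
                suppressNotification = false;

                // One Reset notification instead of one per item.
                OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
            }
        }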

    Read the article

  • The Diabolical Developer: What You Need to Do to Become Awesome

    - by Tori Wieldt
    Wearing sunglasses and quite possibly hungover, Martijn Verburg's evil persona provided key tips on how to be a Diabolical Developer. His presentation at TheServerSide Java Symposium was heavy on the sarcasm and provided lots of laughter. Martijn insisted that developers take their power back and get rid of all the "modern fluff" that distracts developers. He provided several key tips to become a Diabolical Developer:

    * Learn only from yourself. Don't read blogs or books, and don't attend conferences. If you must go on forums, only do it to display your superiority; answer as obscurely as possible.
    * Work alone. Best coding happens when you are alone in your room; lock yourself in for days. Make sure you have a gaming machine in with you.
    * Keep information to yourself. Knowledge is power. Think job security. Never provide documentation.
    * Make sure only you can read your code. Don't put comments in your code. Name your variables A, B, C... A1, B1, etc. If someone insists you format your code in a standard way, change a small section and revert it back as soon as they walk away from your screen.
    * Stick to what you know. Stay on Java 1.3. Don't bother learning abstractions. Write your application in a single file. Stuff as much code into one class as possible; a 30,000-line class is fine. Makes it easier for you to read and maintain.
    * Use Real Tools. No "fancy-pancy" IDEs. Real developers only use vi.
    * Ignore Fads. The cloud is massively overhyped. Mobile is a big fad for young kids. The big, clunky desktop computer (with a real keyboard) will return. Learn new stuff only to pad your resume. Ajax is great for that.
    * Skip Testing. Test-driven development is a complete waste of time. They sent men to the moon without unit tests. Just write your code properly in the first place and you don't need tests.
    * Compiled = Ship It. User acceptance testing is an absolute waste of time.
    * Use a Single Thread. Don't use multithreading. All you need to do is throw more hardware at the problem.
    * Don't waste time on SEO. If you've written the contract correctly, you are paid for writing code, not attracting users. You don't want a lot of users; they only report problems.
    * Avoid meetings. Fake being sick to avoid meetings. If you are forced into a meeting, play corporate bingo. Once you stand up and shout "bingo" you will be kicked out of the meeting. Job done.

    Follow these tips and you'll be well on your way to being a Diabolical Developer!

    Read the article

  • Tips on ensuring Model Quality

    - by [email protected]
    Given enough data that represents well the domain and models that reflect exactly the decision being optimized, models usually provide good predictions that ensure lift. Nevertheless, sometimes the modeling situation is less than ideal. In this blog entry we explore the problems found in a few such situations and how to avoid them.

    1 - The Model does not reflect the problem you are trying to solve
    For example, you may be trying to solve the problem: "What product should I recommend to this customer" but your model learns on the problem: "Given that a customer has acquired our products, what is the likelihood for each product". In this case the model you built may be too far of a proxy for the problem you are really trying to solve. What you could do in this case is try to build a model based on the result from recommendations of products to customers. If there is not enough data from actual recommendations, you could use a hybrid approach in which you would use the [bad] proxy model until the recommendation model converges.

    2 - Data is not predictive enough
    If the inputs are not correlated with the output then the models may be unable to provide good predictions. For example, if the input is the phase of the moon and the weather and the output is what car did the customer buy, there may be no correlations found. In this case you should see a low quality model. The solution in this case is to include more relevant inputs.

    3 - Not enough cases seen
    If the data learned does not include enough cases, at least 200 positive examples for each output, then the quality of recommendations may be low. The obvious solution is to include more data records. If this is not possible, then it may be possible to build a model based on the characteristics of the output choices rather than the choices themselves. For example, instead of using products as output, use the product category, price and brand name, and then combine these models.

    4 - Output leaking into input giving the false impression of good quality models
    If the input data in the training includes values that have changed or are available only because the output happened, then you will find some strong correlations between the input and the output, but these strong correlations do not reflect the data that you will have available at decision (prediction) time. For example, if you are building a model to predict whether a web site visitor will succeed in registering, and the input includes the variable DaysSinceRegistration, and you learn when this variable has already been set, you will probably see a big correlation between having a Zero (or one) in this variable and the fact that registration was successful. The solution is to remove these variables from the input or make sure they reflect the value as of the time of decision and not after the result is known.
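
    As a small illustration of point 3, a hedged sketch of a pre-training sanity check that counts positive examples per output label; the 200 threshold comes from the text above, while the data shape and names are invented for the example:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class TrainingDataCheck
        {
            // Each record pairs an output label (e.g. the product chosen) with whether
            // it is a positive example for that label.
            static void Main()
            {
                var records = new List<(string Label, bool Positive)>
                {
                    ("ProductA", true), ("ProductA", false), ("ProductB", true)
                    // ... the real data set would go here
                };

                const int minPositives = 200;   // threshold suggested in the blog entry

                foreach (var group in records.Where(r => r.Positive).GroupBy(r => r.Label))
                {
                    if (group.Count() < minPositives)
                        Console.WriteLine($"Warning: only {group.Count()} positive examples for {group.Key}");
                }
            }
        }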

    Read the article

  • Version control and data provenance in charts, slides, and marketing materials that derive from code output

    - by EMS
    I develop as part of a small team that mostly does research and statistics stuff. But from the output of our code, other teams often create promotional materials, slides, presentations, etc. We run into a big problem because the marketing team (non-programmers) tend to use Excel, Adobe products, or other tools to carry out their work, and just want easy-to-use data formats from us. This leads to data provenance problems. We see email chains with attachments from 6 months ago and someone is saying "Hey, who generated this data? Can you generate more of it with the recent 6 months of results added in?"

    I want to help the other teams effectively use version control (my team uses it reasonably well for the code, but every other team classically comes up with many excuses to avoid it). For version controlling a software project where the participants are coders, I have some reasonable understanding of best practices and what to do. But for getting a team of marketing professionals to version control marketing materials and associate metadata about the software used to generate the data for the charts, I'm a bit at a loss.

    Some of the goals I'd like to achieve:

    Data that supported a material should never be associated with a person. As in, it should never be the case that someone says "Hey Person XYZ, I see you sent me this data as an attachment 6 months ago, can you update it for me?" Rather, data should be associated with the code and code-version of any code that was used to get it, and perhaps a team of many people who may maintain that code. Then references for data updates are about executing a specific piece of code, with a known version number.

    I'd like this to be a process that works easily with the tech that the marketing team already uses (e.g. Excel files, Adobe files, whatever). I don't want to burden them with needing to learn a bunch of new stuff just to use version control. They are capable folks, so learning something is fine. Ideally they could use our existing version control framework, but there are some issues around that. I think knowing some general best practices will be enough though, and I can handle patching that into the way our stuff works now.

    Are there any goals I am failing to think about? What are the time-tested ways to do something like this?
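
    One lightweight way to get the "data is associated with code, not a person" goal is to have the generating script write a small provenance sidecar next to every exported file. A hedged sketch; the file layout and the use of a git commit hash are assumptions about how such a team might work, not something stated in the original post:

        using System;
        using System.Diagnostics;
        using System.IO;

        static class Provenance
        {
            // Writes "report.xlsx.provenance.txt" next to the exported file, recording
            // which code (by git commit) and which command produced it, and when.
            public static void WriteSidecar(string exportedFile, string command)
            {
                string commit = Run("git", "rev-parse HEAD").Trim();
                string sidecar = exportedFile + ".provenance.txt";
                File.WriteAllText(sidecar,
                    $"generated-by: {command}{Environment.NewLine}" +
                    $"code-version: {commit}{Environment.NewLine}" +
                    $"generated-at: {DateTime.UtcNow:O}{Environment.NewLine}");
            }

            private static string Run(string exe, string args)
            {
                var psi = new ProcessStartInfo(exe, args)
                {
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                };
                using (var p = Process.Start(psi))
                {
                    string output = p.StandardOutput.ReadToEnd();
                    p.WaitForExit();
                    return output;
                }
            }
        }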

    Read the article

  • Design pattern for an ASP.NET project using Entity Framework

    - by MPelletier
    I'm building a website in ASP.NET (Web Forms) on top of an engine with business rules (which basically resides in a separate DLL), connected to a database mapped with Entity Framework (in a 3rd, separate project). I designed the Engine first, which has an Entity Framework context, and then went on to work on the website, which presents various reports. I believe I made a terrible design mistake in that the website has its own context (which sounded normal at first). I present this mockup of the engine and a report page's code-behind.

    Engine (in separate DLL):

        public class Engine
        {
            DatabaseEntities _engineContext;

            public Engine()
            {
                // Connection string and procedure managed in DB layer
                _engineContext = DatabaseEntities.Connect();
            }

            public void ChangeSomeEntity(SomeEntity someEntity, int newValue)
            {
                // Suppose there's some validation too, non trivial stuff
                someEntity.Value = newValue;
                _engineContext.SaveChanges();
            }
        }

    And report:

        public partial class MyReport : Page
        {
            Engine _engine;
            DatabaseEntities _webpageContext;

            public MyReport()
            {
                _engine = new Engine();
                _webpageContext = DatabaseEntities.Connect();
            }

            public void ChangeSomeEntityButton_Clicked(object sender, EventArgs e)
            {
                SomeEntity someEntity;

                // Wrong way:
                // Get the entity from the webpage context
                someEntity = _webpageContext.SomeEntities.Single(s => s.Id == SomeEntityId);
                // Send the entity from _webpageContext to the engine
                _engine.ChangeSomeEntity(someEntity, SomeEntityNewValue); // <- oops, conflict of context

                // Right(?) way:
                // Get the entity from the engine context
                someEntity = _engine.GetSomeEntity(SomeEntityId); // undefined above
                // Send the entity from the engine's context to the engine
                _engine.ChangeSomeEntity(someEntity, SomeEntityNewValue);
            }
        }

    Because the webpage has its own context, giving the Engine an entity from a different context will cause an error. I happen to know not to do that, to only give the Engine entities from its own context. But this is a very error-prone design. I see the error of my ways now. I just don't know the right path. I'm considering:

    * Creating the connection in the Engine and passing it off to the webpage. Always instantiate an Engine, make its context accessible from a property, and share it. Possible problems: other conflicts? Slow? Concurrency issues if I want to expand to AJAX?
    * Creating the connection from the webpage and passing it off to the Engine (I believe that's dependency injection?). A sketch of this option appears below.
    * Only talking through IDs. Creates redundancy, not always practical, sounds archaic. But at the same time, I already recuperate stuff from the page as IDs that I need to fetch anyway.

    What would be the best compromise here for safety, ease-of-use and understanding, stability, and speed?
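
    For the dependency-injection option mentioned in the list above, a minimal sketch of constructor injection of a shared context. This is hedged: it reuses the DatabaseEntities and SomeEntity types from the question, and it deliberately ignores context lifetime management, which would still need thought in a Web Forms page life cycle:

        // Hypothetical sketch: the page owns one context per request and hands it to the Engine,
        // so both sides work with entities attached to the same context.
        public class Engine
        {
            private readonly DatabaseEntities _context;

            public Engine(DatabaseEntities context)   // injected instead of self-created
            {
                _context = context;
            }

            public SomeEntity GetSomeEntity(int id)
            {
                return _context.SomeEntities.Single(s => s.Id == id);
            }

            public void ChangeSomeEntity(SomeEntity someEntity, int newValue)
            {
                someEntity.Value = newValue;
                _context.SaveChanges();
            }
        }

        public partial class MyReport : Page
        {
            private DatabaseEntities _context;
            private Engine _engine;

            protected void Page_Load(object sender, EventArgs e)
            {
                _context = DatabaseEntities.Connect();
                _engine = new Engine(_context);   // single shared context for this request
            }
        }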

    Read the article

  • Solving Inbound Refinery PDF Conversion Issues, Part 1

    - by Kevin Smith
    Working with Inbound Refinery (IBR) and PDF Conversion can be very frustrating. When everything is working smoothly you kind of forget it is even there. Documents are checked into WebCenter Content (WCC), sent to IBR for conversion, converted to PDF, returned to WCC, and voilà, your Office documents have a nice PDF rendition available for viewing. Then a user checks in a bunch of password-protected Word files, the conversions fail, your IBR queue starts backing up, users start calling asking why their documents have not been released yet, and you spend a frustrating afternoon trying to recover and get things running properly again.

    Password-protected documents are one cause of PDF conversion failures, and I will cover those in a future blog post, but there are many other problems that can cause conversions to fail, especially when working with the WinNativeConverter and using the native applications, e.g. Word, to convert a document to PDF. There are other conversion options like PDFExportConverter, which uses Oracle OutsideIn to convert documents directly to PDF without the need for the native applications. However, to get the best fidelity to the original document the native applications must be used. Many customers have tried PDFExportConverter, but have stayed with the native applications for conversion since the conversion results from PDFExportConverter were not as good as when the native applications are used.

    One problem I ran into recently, that at least has an easy solution, are Word documents that display a Show Repairs dialog when the document is opened. If you open the problem document yourself you will see this dialog. This will cause the conversion to time out. Any time the native application displays a dialog that requires user input, the conversion will time out. The solution is to add a BulletProofOnCorruption setting to the registry for the user running Word on the IBR server. See this support note from Microsoft for details. The support note says to set the registry key under HKEY_CURRENT_USER, but since we are running IBR as a service the correct location is under HKEY_USERS\.DEFAULT. Also, since in our environment we were using Office 2007, the correct registry key to use was: HKEY_USERS\.DEFAULT\Software\Microsoft\Office\11.0\Word\Options

    Once you have done this, restart the IBR managed server and resubmit your problem document. It should now be converted successfully. For more details on IBR see the Oracle® WebCenter Content Administrator's Guide for Conversion.
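
    For reference, a sketch of that registry change expressed as a .reg file. The DWORD value of 1 is an assumption based on the Microsoft note the article refers to, so verify it against that note before applying, and adjust the 11.0 Office version key to match the installation on your IBR server:

        Windows Registry Editor Version 5.00

        ; Hypothetical .reg sketch - suppresses Word's "Show Repairs" dialog for the
        ; account running Inbound Refinery as a service (hence HKEY_USERS\.DEFAULT).
        [HKEY_USERS\.DEFAULT\Software\Microsoft\Office\11.0\Word\Options]
        "BulletProofOnCorruption"=dword:00000001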

    Read the article

  • Ubuntu 12.04 won't load - hangs at Busybox v1.18.5 / initramfs

    - by Marty
    I want to start by saying that I am very new with Linux (about 1 month using it). I have had no problems up until now. I am running Ubuntu 12.04 from a Toshiba laptop with a 250 GB hard drive and 3 GB of RAM. Everything worked fine yesterday. The only changes I made were that I downloaded Banshee to try as a replacement for Rhythmbox and did a few recommended updates. This morning I tried to boot and it took a long time and I finally got this error:

        mount: mounting /dev/disk/by-uuid/02bc41cc-1e21-4700-a179-be2805a658c4 on /root failed: Invalid argument
        mount: mounting /dev on /root/dev failed: No such file or directory
        mount: mounting /sys on /root/sys failed: No such file or directory
        mount: mounting /proc on /root/proc failed: No such file or directory
        Target filesystem doesn't have requested /sbin/init.
        No init found. Try passing init= bootarg.

        BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
        Enter 'help' for a list of built-in commands
        (initramfs)

    I'm not sure what to do beyond this point. I have read around on here and haven't found the help I need. I did try to boot it from the Live CD. I can boot up to the Try Ubuntu/Install Ubuntu screen. When I go through the Try Ubuntu selection I can't access my hard disk. When I clicked on it I got this error:

        Unable to mount 247 GB Filesystem
        Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sda1,
        missing codepage or helper program, or other error.
        In some cases useful info is found in syslog - try dmesg | tail or so.

    I tried dmesg | tail and saw a string of values but nothing that looked helpful. I have also tried to boot from the GRUB screen in recovery mode and as the previous Linux version, but they didn't work either. I tried to load Windows Recovery Environment (loader) (on /dev/sdc3) and got this message:

        error: no such device: 268057B1805785E9
        error: hd1 cannot get C/H/S values

    I had seen somewhere that I could fix this with the Live CD but my knowledge isn't good enough to try. I tried something with Gpart that I had read about, but the system told me that I didn't have Gpart. Could someone please explain to me what I need to do and/or haven't tried yet. Thanks so much!

    Read the article

  • Finding the Twins when Implementing Catmull-Clark subdivision using Half-Edge mesh [migrated]

    - by Ailurus
    Note: The description became a little longer than expected. Do you know a readable implementation of this algorithm using this mesh? Please let me know!

    I'm trying to implement Catmull-Clark subdivision using Matlab (because later on the results have to be compared with some other stuff already implemented in Matlab). The first try was with a Vertex-Face mesh; the algorithm works but it is of course not very efficient (since you need neighbouring information for edges and faces). Therefore, I'm now using a Half-Edge mesh (info), see also the paper of Lutz Kettner. Wikipedia link to the idea behind Catmull-Clark SDV: Wiki.

    My problem lies in finding the Twin HalfEdges; I'm just not sure how to do this. Below I'm describing my thoughts on the implementation, trying to keep it concise.

    Half-Edge mesh (using indices to Vertices/HalfEdges/Faces):
    Vertex (x, y, z, Outgoing_HalfEdge)
    HalfEdge (HeadVertex (or TailVertex, which one should I use), Next, Face, Twin)
    Face (HalfEdge)

    To keep it simple for now, assume that every face is a quadrilateral. The actual mesh is a list of Vertices, HalfEdges and Faces. The new mesh will consist of NewVertices, NewHalfEdges and NewFaces, like this (note: Number_... is the number of ...):

    NumberNewVertices: Number_Faces + Number_HalfEdges/2 + Number_Vertices
    NumberNewHalfEdges: 4 * 4 * Number_Faces
    NumberNewFaces: 4 * Number_Faces

    Catmull-Clark:

    Find the FacePoint (centroid) of each Face: just average the x, y, z values of the vertices, save as a NewVertex.

    Find the EdgePoint of each HalfEdge: to prevent duplicates (each HalfEdge has a Twin which would result in the same HalfEdge), only calculate EdgePoints of the HalfEdge which has the lowest index of the pair.

    Update the old Vertices.

    OK, now all the new Vertices are calculated (however, their Outgoing_HalfEdge is still unknown). The next step is to save the new HalfEdges and Faces. This is the part causing me problems!

    Loop through each old Face; there are 4 new Faces to be created (because of the quadrilateral assumption):
    First create the 4 new HalfEdges per new Face, starting at the FacePoint to the EdgePoint.
    Next a new HalfEdge from the EdgePoint to an updated Vertex.
    Another new one from the updated Vertex to the next EdgePoint.
    Finally the fourth new HalfEdge from the EdgePoint back to the FacePoint.

    The HeadVertex of each new HalfEdge is known, the Next HalfEdge too. The Face is also known (since it is the new face you're creating!). Only the Twin HalfEdge is unknown; how should I know this? By the way, while looping through the Vertices of the new Face, assign the Outgoing_HalfEdge to the Vertices. This is probably the place to find out which HalfEdge is the Twin.

    Finally, after the 4 new HalfEdges are created, save the Face with the HalfEdge index of the last newly created HalfEdge.

    I hope this is clear; if needed I can post my (obviously not-yet-finished) Matlab code.
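
    A common way to find the twins is to keep a lookup keyed by the ordered (tail, head) vertex pair while the half-edges are being created: when a new half-edge is added, look for the reversed pair; if it is already present, the two edges are twins. A minimal sketch of that bookkeeping (in C# rather than Matlab, with invented type names, purely to illustrate the idea):

        using System.Collections.Generic;

        // Hypothetical sketch of twin lookup during half-edge construction.
        public class HalfEdgeBuilder
        {
            // Maps (tailVertex, headVertex) -> index of the half-edge that runs that way.
            private readonly Dictionary<(int Tail, int Head), int> edgeLookup =
                new Dictionary<(int Tail, int Head), int>();

            // Twin[i] is the index of half-edge i's twin, or -1 if not paired yet.
            public List<int> Twin { get; } = new List<int>();

            public int AddHalfEdge(int tailVertex, int headVertex)
            {
                int index = Twin.Count;
                Twin.Add(-1);
                edgeLookup[(tailVertex, headVertex)] = index;

                // The twin runs in the opposite direction between the same two vertices.
                int twinIndex;
                if (edgeLookup.TryGetValue((headVertex, tailVertex), out twinIndex))
                {
                    Twin[index] = twinIndex;
                    Twin[twinIndex] = index;
                }
                return index;
            }
        }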

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works. The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the arc case: PSARC 2010/312 Link-editor guidance The History Of Unfortunate Link-Editor Defaults The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system. Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Extensible Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading, and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: For backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort. 
We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines: Change ld Defaults Since the world would be a better place if the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting. New Link-Editor One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of existing code in the world built around the existing ld command, that reaches back to the 1970's. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues. An Option To Make ld Do The Right Things Automatically This line of reasoning is best summarized by a CR filed in 2005, entitled 6239804 make it easier for ld(1) to do what's best. The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction: The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best. The intent is that when a user opts into -z best, they understand that -z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change. 
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed. The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it. I drew two conclusions from the above history: For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces. For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together. The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls: -z guidance gives advice, but the decision to take that advice remains with the user who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience. It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject. The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system. Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards. ld Command Line Options The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature: In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement. 
Guidance suggestions always offer specific advice, and not vague generalizations. You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed. As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that lets you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows: % ld -Dargs -z guidance=foo,nodefs debug: debug: Solaris Linkers: 5.11-1.2275 debug: debug: arg[1] option=-D: option-argument: args debug: arg[2] option=-z: option-argument: guidance=foo,nodefs debug: warning: unrecognized -z guidance item: foo The -z fatal-warnings option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms: -z fatal-warnings | nofatal-warnings --fatal-warnings | --no-fatal-warnings The -z fatal-warnings and the --fatal-warnings option cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings option cause the link-editor to treat warnings as non-fatal. This is the default behavior. The -z guidance option is defined as follows: -z guidance[=item1,item2,...] Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices. It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows. Specify Required Dependencies Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs. Do Not Specify Non-Required Dependencies Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused. Lazy Loading Dependencies should be identified for lazy loading. 
Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload. Direct Bindings Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct, or -z direct options should any dependency be processed before either of these options, or the -z nodirect option is encountered. This guidance can be disabled with -z guidance=nodirect. Pure Text Segment Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn nor -z textoff options are encountered. This guidance can be disabled with -z guidance=notext. Mapfile Syntax All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile. Library Search Path Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath. In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects. Example The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings: Does not specify all its dependencies Specifies dependencies it does not use Does not use direct bindings Uses a version 1 mapfile Contains relocations to the readonly allocable text (not PIC) This scenario is sadly very common — many shared objects have one or more of these issues. % cat hello.c #include <stdio.h> #include <unistd.h> void hello(void) { printf("hello user %d\n", getpid()); } % cat mapfile.v1 # This version 1 mapfile will trigger a guidance message % cc hello.c -o hello.so -G -M mapfile.v1 -lelf As you can see, the operation completes without error, resulting in a usable object. 
However, turning on guidance reveals a number of things that could be better: % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance ld: guidance: version 2 mapfile syntax recommended: mapfile.v1 ld: guidance: -z lazyload option recommended before first dependency ld: guidance: -B direct or -z direct option recommended before first dependency Undefined first referenced symbol in file getpid hello.o (symbol belongs to implicit dependency /lib/libc.so.1) printf hello.o (symbol belongs to implicit dependency /lib/libc.so.1) ld: warning: symbol referencing errors ld: guidance: -z defs option recommended for shared objects ld: guidance: removal of unused dependency recommended: libelf.so.1 warning: Text relocation remains referenced against symbol offset in file .rodata1 (section) 0xa hello.o getpid 0x4 hello.o printf 0xf hello.o ld: guidance: position independent (PIC) code recommended for shared objects ld: guidance: see ld(1) -z guidance for more information Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things: % cat mapfile.v2 # This version 2 mapfile will not trigger a guidance message $mapfile_version 2 % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance There are situations in which the guidance does not fit the object being built. For instance, suppose you want to build an object without direct bindings: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance ld: guidance: -B direct or -z direct option recommended before first dependency ld: guidance: see ld(1) -z guidance for more information It is easy to disable that specific guidance warning without losing the overall benefit from allowing the remainder of the guidance feature to operate: % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect Conclusions The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact. Deciding to use guidance is likely to cause some up-front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will repay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.

    Read the article

  • Are there deprecated practices for multithread and multiprocessor programming that I should no longer use?

    - by DeveloperDon
    In the early days of FORTRAN and BASIC, essentially all programs were written with GOTO statements. The result was spaghetti code, and the solution was structured programming. Similarly, pointers can have difficult-to-control characteristics in our programs. C++ started with plenty of pointers, but the use of references is recommended. Libraries like the STL can reduce some of our dependency. There are also idioms to create smart pointers that have better characteristics, and some versions of C++ permit references and managed code. Programming practices like inheritance and polymorphism use a lot of pointers behind the scenes (just as structured programming with for, while, and do generates code filled with branch instructions). Languages like Java eliminate pointers and use garbage collection to manage dynamically allocated data instead of depending on programmers to match all their new and delete statements. In my reading, I have seen examples of multi-process and multi-thread programming that don't seem to use semaphores. Do they use the same thing with different names, or do they have new ways of structuring protection of resources from concurrent use? For example, a specific system for multithreaded programming with multicore processors is OpenMP. It represents a critical region as follows, without the use of semaphores, which seem not to be included in the environment:
    th_id = omp_get_thread_num();
    #pragma omp critical
    {
      cout << "Hello World from thread " << th_id << '\n';
    }
    This example is an excerpt from http://en.wikipedia.org/wiki/OpenMP. Alternatively, similar protection of threads from each other using semaphores with functions wait() and signal() might look like this:
    wait(sem);
    th_id = get_thread_num();
    cout << "Hello World from thread " << th_id << '\n';
    signal(sem);
    In this example, things are pretty simple, and a quick review is enough to show that the wait() and signal() calls are matched and that, even with a lot of concurrency, thread safety is provided. But other algorithms are more complicated and use multiple semaphores (both binary and counting) spread across multiple functions, with complex conditions that can be called by many threads. The consequences of creating deadlock or failing to make things thread-safe can be hard to manage. Do systems like OpenMP eliminate the problems with semaphores? Do they move the problem somewhere else? How do I transform my favorite semaphore-using algorithm to not use semaphores anymore?
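    As a point of reference, OpenMP does provide a low-level lock API (omp_lock_t with omp_set_lock/omp_unset_lock) that maps almost one-to-one onto a binary semaphore's wait()/signal(), so an existing semaphore-based algorithm can often be ported mechanically before any restructuring around #pragma omp critical or other higher-level constructs. The snippet below is only a sketch of that mapping, not a recommendation; note that OpenMP has no built-in counting semaphore, so counting cases typically end up as condition-variable style constructs or, in C++20, std::counting_semaphore.
    #include <iostream>
    #include <omp.h>

    int main() {
        // The lock plays the role of the binary semaphore from the question:
        // omp_set_lock() corresponds to wait(), omp_unset_lock() to signal().
        omp_lock_t lock;
        omp_init_lock(&lock);

        #pragma omp parallel
        {
            int th_id = omp_get_thread_num();

            omp_set_lock(&lock);    // wait(sem)
            std::cout << "Hello World from thread " << th_id << '\n';
            omp_unset_lock(&lock);  // signal(sem)
        }

        omp_destroy_lock(&lock);
        return 0;
    }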

    Read the article

  • Make Firefox Show Google Results for Default Address Bar Searches

    - by The Geek
    Have you ever typed something incorrectly into the Firefox address bar, and then had it take you to a page you weren’t expecting? The reason is because Firefox uses Google’s “I’m Feeling Lucky” search, but you can change it. Scratching your head? Let’s take a quick run through what we’re talking about… Normally, if you typed in something like just “howtogeek” in the address bar, and then hit enter… you’ll be taken directly to the How-To Geek site. But how? Very simple! It’s the same place you would have been taken to if you typed “howtogeek” into Google, and then clicked the “I’m Feeling Lucky” button, which takes you to the first result. This is what Firefox does behind the scenes when you put something into the address bar that isn’t a URL. But what if you’d rather head to the search results page instead? Luckily, all you have to do is tweak an about:config parameter in Firefox. Just head into about:config in the address bar, and then filter for keyword.url like so: Double-click on the entry in the list, and then delete the &gfns=1 from the value. That’s the part of the URL that triggers Google to redirect to the first result. And now, the next time you type something into the address bar, either on purpose or because you typo’d it, you’ll be taken to the results page instead: About:Config tweaking is lots of fun.
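    For reference, the tweak amounts to removing the Lucky-redirect flag from the keyword.url value. The exact default URL differs between Firefox versions, so treat the strings below as a hypothetical illustration rather than the value you will necessarily see:
    Before (redirects to the first result): http://www.google.com/search?ie=UTF-8&oe=UTF-8&sourceid=navclient&gfns=1&q=
    After (shows the regular results page): http://www.google.com/search?ie=UTF-8&oe=UTF-8&sourceid=navclient&q=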

    Read the article

< Previous Page | 409 410 411 412 413 414 415 416 417 418 419 420  | Next Page >