Search Results

Search found 15925 results on 637 pages for 'os walk'.

Page 325/637

  • Links to my best articles

    - by renatohaddad
    Hi everyone, I've decided to gather the links to my best articles in one place:

    - Reverse Engineering in Entity Framework 5: http://msdn.microsoft.com/pt-br/library/jj856239.aspx
    - Geographic Data in Entity Framework 5 and SQL Server 2012: http://msdn.microsoft.com/pt-br/library/jj900151.aspx
    - Data Maintenance in Entity Framework 4: http://msdn.microsoft.com/pt-br/library/jj128160.aspx
    - POCO in Entity Framework 4: http://msdn.microsoft.com/pt-br/library/ff978717.aspx
    - The Importance and Use of Data Annotations: http://msdn.microsoft.com/pt-br/library/jj129537.aspx
    - Using Optional Parameters in Visual C# 4: http://msdn.microsoft.com/pt-br/library/jj218324.aspx
    - Bing Maps on Windows Phone 7.5: http://msdn.microsoft.com/pt-br/library/hh972467.aspx
    - Windows Phone 7 - OData data: http://msdn.microsoft.com/pt-br/library/hh972465.aspx
    - Speech Recognition in Windows Phone 8: http://msdn.microsoft.com/pt-br/library/jj856240.aspx

    Note that I cover almost all of the topics in these articles in detail in my downloadable training courses at http://www.renatohaddad.com/loja. Happy studying, and success with your projects! Renatão

    Read the article

  • PXE booting on old computer

    - by kosciak
    So my computer is not working - the disk is completely wiped. I deleted GRUB and PLOP, which I had used to install a new system, because the CD-ROM drive is broken and the BIOS is old (the whole computer is old, it's a Sony Vaio PCG-GR250), so it won't let me boot from USB, and I have no floppy drive :) The only way left is to PXE boot PLOP and then install Linux from USB once PLOP has loaded. (I'm not a specialist, but that's how I see it.) I'm using Mac OS X 10.9, and I followed a number of tutorials on how to set up a TFTP and DHCP server, and I have PLOP ready, but when I boot up with PXE it says that it found DHCP but TFTP timed out. Any help, or an alternative way of rescuing an old laptop? Thanks in advance!
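    One possible way to simplify the server side is dnsmasq (installable on OS X via Homebrew or MacPorts), which provides DHCP and TFTP in a single process. A minimal sketch, assuming no other DHCP server is active on the network and that PLOP's PXE loader has been saved as plpbt.bin; the subnet, paths, and file names here are assumptions:

        # dnsmasq.conf - hypothetical minimal PXE setup for booting PLOP
        port=0                                 # disable DNS; we only want DHCP/TFTP
        dhcp-range=192.168.1.100,192.168.1.150,12h
        dhcp-boot=plpbt.bin                    # boot file the PXE client should fetch
        enable-tftp
        tftp-root=/srv/tftp                    # plpbt.bin must live here

        # run in the foreground to watch the DHCP/TFTP exchange:
        #   sudo dnsmasq -d -C dnsmasq.conf

    A "TFTP timed out" after a successful DHCP reply often points at a firewall blocking UDP port 69 on the server, so that may be worth checking regardless of which TFTP daemon is used.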

    Read the article

  • Code::Blocks GUI problems [closed]

    - by foobar
    I'm having problems with Code::Blocks 10.05. My OS specs: Ubuntu 11.04 (fresh install), GTK theme: Ambiance, Unity. The actual problems (don't mind the code nor the errors):

    The most visible one - stripes. As I scroll down the code, horizontal stripes start to show up. I think it's a problem with screen updating, because when I force an update (for example by selecting text or invoking the context menu with the right mouse button), the stripes disappear.

    The second one - a font colour bug. I believe this is Ambiance's bug, because it doesn't happen in other themes. You can see it in the left panel where it says "Workspace" and in the panel at the bottom in the Build messages tab: the selected line makes the text barely readable.

    Is there any fix for these bugs? Thanks.

    Read the article

  • Upgrades in 5 Easy Pieces

    - by Anne R.
    Even though there are a few select tasks that I have to do once or twice a year, I can't remember how to do them! Or where to find the bits and pieces to complete the task. So I love it when someone consolidates everything under one spot. That's what the CRM On Demand team has done with the upgrade information. Specifically, they have:

    - Provided a "one-stop" area for managing upgrades at your company.
    - Broken down the upgrade process into 5 (yes, 5) steps.
    - Explained when and how to perform each step, with dates specific to your pod.
    - Included details about each step, visible by expanding the step.
    - Translated the steps into 11 languages.
    - Added a list of release-specific resources with links from the page.

    Now, just head for the Training and Support portal, click the Release Info tab, and walk through the "5 Essential Steps to a Successful Upgrade." Before you continue, though, select your language from the drop-down list on the Release Info page. CRM On Demand now has the upgrade steps translated into 11 languages. On the Step page, you can expand each section in sequence and follow the more detailed instructions that appear. This will ensure that you've covered all your bases for each upgrade. Here's a shortened version of the information that you'll find:

    1. Verify your Primary Contact Information. Have you checked your primary contact information to make sure you're being notified of all upgrade information? Or do you want more users to receive upgrade announcements? This section provides you with the navigation path to do that in CRM On Demand.

    2. Review your Key Upgrade Dates. If you expand this step, a nice table appears with your critical dates for the various milestones. IMPORTANT: When your CRM On Demand pod has been officially added to the upgrade schedule, closer to the release date itself, this table will display your specific timetable.

    3. Migrate your Customizations from the Staging Environment before the Snapshot Date. Oracle refreshes the Staging data with a copy of your Production data made on the Production Snapshot Date, so this section lists considerations relevant to this step. It also reminds you of the 2-week period when you should not be making any changes in your Staging environment.

    4. Conduct your Upgrade Validation on the Staging Environment. When the Customer Validation Testing period begins, you need to log in to your Staging Environment to validate that your key business processes and customizations continue to behave as expected. If your company utilizes Web Services, Web Links, Web Applets or Workflow, focus on testing these first. You generally have about two weeks for testing. If you run into problems during this time, follow the instructions shown in this section for logging a service request. It describes exactly how to fill out the fields in the SR for the fastest resolution.

    5. Conduct "White Glove" Testing in your Upgraded Production Environment. Before users start using the upgrade, you should access a few tabs and reports. Doing this actually warms up the cache so that frequently used pages and reports will come up at normal speed on Monday morning, when users log in to the upgraded system. Resources listed under this step help you in further preparing for the upgrade.

    Now there's also a new Documentation section on the right with links to these release-specific resources.

    "Very nice," I commented, when discussing these improvements with the "responsible party." She confirmed that, yes, they tried to consolidate the upgrade information, translate it for better communication, simplify it into 5 easy pieces, and drive admins responsible for handling upgrades to this one site instead of sending out elaborate emails. Yes, I just love it when someone practically reaches out and holds my hand through a process. Next best thing to a wizard!

    Read the article

  • Sony steps back from Linux?

    - by EmbeddedInsider
    On CNET today I saw something interesting: According to Sony, it plans to release PlayStation 3 firmware version 3.21 on Thursday to achieve one goal: eliminate the "Other OS" option currently available in all pre-Slim models of the video game console. The feature allowed PS3 owners to install an operating system--in almost every case, Linux--onto the PlayStation 3. No surprise. Sony is a company heavily invested in legacy IP (games, all that music and Blu-ray). They know that content can be nowhere near the GPL. http://news.cnet.com/8301-13506_3-10471356-17.html?tag=rtcol;pop

    Read the article

  • How to disable tap to click in Lubuntu 13.10

    - by radiomasten
    Tap-to-click is usually the first thing I disable when I have installed a new OS, but this time I couldn't get rid of it. In earlier versions of Lubuntu, I was able to disable it by adding "@synclient MaxTapTime=0" to /etc/xdg/lxsession/Lubuntu/autostart and saving. But in Lubuntu 13.10 this method doesn't work any more, and I can't find any solution on the internet either. (If there were a checkbox in the "mouse and keyboard" preferences in LXDE to turn tap-to-click on/off permanently, like in Unity, it would make both lovers and haters of this divisive feature happy. I don't understand how this feature could be thought of as something everybody wants.)
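    One hedged workaround, assuming the synaptics X driver is in use: set MaxTapTime at the X configuration level instead of in the LXDE session, so it no longer depends on the autostart file being honoured. The file path and snippet below are an assumption, not a verified Lubuntu 13.10 fix:

        # /etc/X11/xorg.conf.d/50-synaptics.conf (hypothetical; create if absent)
        Section "InputClass"
            Identifier "touchpad: disable tap-to-click"
            MatchIsTouchpad "on"
            Driver "synaptics"
            # with a maximum tap duration of 0 ms, no touch ever counts as a tap
            Option "MaxTapTime" "0"
        EndSection

    Running synclient MaxTapTime=0 by hand after login first would confirm whether the setting itself still works before touching the X configuration.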

    Read the article

  • Hot Java Content

    - by Tori Wieldt
    It's August, summertime in the United States, and time for many of us to go on vacation. (You'll have to find my personal account to see more photos of the Monterey Bay Aquarium.) Here's some great Java content that you may have missed while I was gone:

    Blogs
    - Project Jigsaw: Late for the train: The Q&A
    - JSR 355 Final Release, and moves JCP to version 2.9
    - Oracle releases JDK for Linux ARM, JRE for Mac OS X
    - Architects and Architecture at JavaOne 2012
    - Java Champions at JavaOne 2012

    Podcasts & Videos
    - Java Spotlight Episode 96: Johan Vos on Glassfish and JavaFX
    - Java Spotlight Episode 94: Kirk Pepperdine on Java Performance Tuning
    - Java Spotlight Episode 93: Jonathan Giles on JavaFX 2.2 UI Controls
    - Video: JavaFX Canvas Node

    July/August Java Magazine (free subscription)
    - Developer Power: Web-based Development Tools
    - Fork/Join Framework for Client Java Applications
    - Intro to Web Service Security
    - How to Modify javac
    - Oracle's Berkeley DB Java Edition's Java API
    - and more

    Java Magazine is available on the App Store and the Android Market. Get all this great Java content while it's as hot as a North American (non-San Franciscan) summer.

    Read the article

  • Using the Script Component as a Conditional Split

    This is a quick walk-through of how you can use the Script Component to perform Conditional Split-like behaviour, splitting your data across multiple outputs. We will use C# code to decide what flows to which output, rather than the expression syntax of the Conditional Split transformation.

    Start by setting up the source. For my example the source is a list of SQL objects from sys.objects, just a quick way to get some data:

        SELECT type, name FROM sys.objects

        type  name
        ----  -----------------------
        S     syssoftobjrefs
        F     FK_Message_Page
        U     Conference
        IT    queue_messages_23007163

    Shown above is a small sample of the data you could expect to see. Once you have set up your source, add the Script Component, selecting Transformation when prompted for the type, and connect it up to the source.

    Now open the component, but don't dive into the script just yet. First we need to select some columns. Select the Input Columns page, then select the columns we want to use as part of our filter logic. You don't need to choose columns that you may want later; this is just the columns used in the script itself.

    Next we need to add our outputs. Select the Inputs and Outputs page. You get one output by default, but we need to add some more; it wouldn't be much of a split otherwise. For this example we'll add just one more. Click the Add Output button, and you'll see a new output is added. Now we need to set some properties, so make sure our new Output 1 is selected. In the properties grid, change the SynchronousInputID property to be our input, Input 0, and change the ExclusionGroup property to 1. Now select Output 0 and change its ExclusionGroup property to 2. The value itself isn't important, provided each output has a different value other than zero. Setting this property on both outputs allows us to split the data down one or the other, making each exclusive. If we left it at 0, that output would get all the rows. It can be a useful feature, allowing you to copy selected rows to one output whilst retaining the full set of data in the other.

    Now we can go back to the Script page and start writing some code. For the example we will do a very simple test: if the value of the type column is U, for user table, the row goes down the first output; otherwise it ends up in the other. This mimics the exclusive behaviour of the Conditional Split transformation.

        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // Filter all user tables to the first output,
            // the remaining objects down the other
            if (Row.type.Trim() == "U")
            {
                Row.DirectRowToOutput0();
            }
            else
            {
                Row.DirectRowToOutput1();
            }
        }

    The code itself is very simple: a basic if clause that determines which of the DirectRowToOutput methods we call, one for each output. Of course you could write a lot more code to implement some very complex logic, but the final direction is still just a method call. If we now close the Script Component, we can hook up the outputs and test the package. Your numbers will vary depending on the sample database, but as you can see we have clearly split our input data into two outputs.

    As a final tip, when adding the outputs I would normally rename them, changing the Name in the Properties grid. This means the generated methods follow that pattern, as do the path labels shown on the design surface, making everything that much easier to recognise.

    Read the article

  • How to remove desktop environments?

    - by MyNameIs...
    I installed a few desktop environments that I wanted to try out on Ubuntu 12.04, but none of them worked at all. It could be that I installed them all at the same time, meaning the OS didn't get a chance to work everything out, but either way, they didn't work. I would now like to remove them. The ones that I installed are Fluxbox, Openbox, XFCE, and MATE. I installed them with the help of this site. Everything seemed to be working properly until I actually tried to use the shells and nothing loaded at all - except for Fluxbox, I think that one worked. I want to know of any way to repair, or perhaps just remove, the packages entirely. I might have already removed them, because I ran apt-get remove on all of them, but they were still in the list on the login screen.
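    A minimal cleanup sketch, assuming the environments were installed via the usual metapackages (the package names below are assumptions and may differ from what the site actually had you install):

        # Remove the packages and their now-unneeded dependencies;
        # purge also drops their system-wide configuration files.
        sudo apt-get purge fluxbox openbox xfce4 mate-desktop-environment
        sudo apt-get autoremove --purge

        # Leftover entries on the login screen usually come from session
        # files; listing them shows what the greeter still offers.
        ls /usr/share/xsessions/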

    Read the article

  • Ubiquity crashes when installing from CD

    - by Ashes
    I didn't want to take any risks, so I ordered a CD from Canonical to get Ubuntu. Thing is, another CD was given to me about 2 days before the CD from Canonical got to me, so I installed Ubuntu 10.10 from it, but there was a problem with the login screen (where the Ubuntu logo should be displayed, it wasn't; instead it would just say "Ubuntu 10.10"), so I decided to reinstall Ubuntu 10.10 with the CD that arrived a few days later. Whenever it's finishing the installation, the installer (Ubiquity) crashes, or sometimes it gets to the part where the boot loader should be installed and for some reason it is unable to install it (and if I choose not to install the boot loader, I don't see how I could start Ubuntu, since I have to reboot my laptop after the installation is over). I'm currently running Ubuntu 10.10 from the CD I ordered, since I have no other OS on this laptop.

    Read the article

  • VirtualBox host | Ubuntu vs XP

    - by iambriansreed
    In order to lengthen the lifespan of my machine I am replacing the weakest link, the hard drive, and installing a new OS. I had planned on using XP Pro as my VirtualBox host and Ubuntu as the guest. After playing with Ubuntu desktop and server I am really impressed, and am thinking of reversing the VirtualBox setup: Ubuntu host, XP guest. I would use XP for Adobe Fireworks, Netflix, and iTunes (maybe), and that's pretty much it. Any reason not to run an Ubuntu host with an XP guest? I know the XP vbox will run slower as a guest, but really, how much slower? It's a desktop: 4 GB RAM, 500 GB disk, Pentium D 3.2 GHz.

    Read the article

  • BizTalk Testing Series - The xpath Function

    - by Michael Stephenson
    Background

    While the xpath function in a BizTalk orchestration is a very powerful feature, I have often come across the situation where someone has hard-coded an xpath expression in an orchestration. If you have read some of my previous posts about testing, I've tried to get across a general theme, like test-driven or test-assisted development approaches, where the underlying principle is that you're building up your solution from small, well-tested units that are put together, and the resulting solution is usually quite robust. You will be finding more bugs within your unit tests and fewer outside of your team.

    The thing I don't like about the xpath function's usual usage is when you come across an orchestration which has something like the below snippet in an expression or assign shape:

        string result = xpath(myMessage, "string(//Order/OrderItem/ProductName)");

    My main issue with this is that the xpath statement is hard-coded in the orchestration, and you don't really know it works until you are running the orchestration. Some of the problems I think you end up with are:

    - You waste time with lengthy debugging of the orchestration when your statement isn't working
    - You might not know the function isn't working quite as expected, because the testable unit around it is big
    - You are much more open to regression issues if your schema changes

    Approach to Testing

    The technique I usually follow is to hold the xpath statement as a constant in a helper class, or to format a constant with a helper function to get the actual xpath statement. It is then used by the orchestration like follows:

        string result = xpath(myMessage, MyHelperClass.ProductNameXPathStatement);

    Because the xpath statement is available outside of the orchestration, it now becomes testable in its own right. This means:

    - I can test it in its own right
    - I'm less likely to waste time tracking down problems caused by an error in the statement
    - I can reduce the risk of regression issues

    I'm now able to implement some testing around my xpath statements, which usually goes something like the following:

    - The test will use a sample xml file
    - The sample will be validated against the schema
    - The test will execute the xpath statement and then check the results are as expected

    Walk-through

    BizTalk uses the XPathNavigator internally behind the xpath function to implement the queries, which you would usually write yourself using the navigator's select or evaluate functions. In the sample (link at bottom) I have a small solution which contains a schema from which I have generated a sample instance. I will then use this instance as the basis for my tests. The sample contains the helper class in which I've encapsulated my xpath expressions, along with some helper functions which will format the expression in the case of a repeating node, where you would want to inject an index into the xpath query. I have then created a test class which has some functions to execute some queries against my sample xml file. In the test class I have a couple of helper functions which will execute the xpath expressions in a similar way to BizTalk; you could have a proper helper class to do this if you wanted. You can then use these constants alongside the xpath function in the BizTalk expression editor.
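    As a concrete illustration, here is a minimal sketch of such a test, executing the statement the way BizTalk does internally via XPathNavigator. The class name, constant, sample file name, and expected value below are assumptions for illustration, not the contents of the downloadable sample:

        // Minimal sketch: testing an xpath statement outside an orchestration.
        using System.Xml.XPath;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        // Hypothetical helper class holding the statement as a constant.
        public static class MyHelperClass
        {
            public const string ProductNameXPathStatement =
                "string(//Order/OrderItem/ProductName)";
        }

        [TestClass]
        public class XPathStatementTests
        {
            // Execute the statement the way BizTalk does internally,
            // via XPathNavigator.Evaluate.
            private static object Evaluate(string xmlPath, string xpath)
            {
                XPathDocument doc = new XPathDocument(xmlPath);
                XPathNavigator nav = doc.CreateNavigator();
                return nav.Evaluate(xpath);
            }

            [TestMethod]
            public void ProductNameStatementReturnsExpectedValue()
            {
                // "SampleOrder.xml" and "Widget" are placeholder test data.
                object result = Evaluate("SampleOrder.xml",
                    MyHelperClass.ProductNameXPathStatement);
                Assert.AreEqual("Widget", (string)result);
            }
        }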
    Conclusion

    I hope you can see that, with very little effort, you can make your life much easier by testing xpath statements outside of an orchestration rather than using them hard-coded directly in the orchestration. This can also save you lots of pain longer term, because your build should break if your schema changes unexpectedly, causing these xpath tests to fail, whereas tests around the orchestration will be more difficult to troubleshoot and work out the cause of the problem.

    Sample Link

    The sample is available from the following link: http://code.msdn.microsoft.com/testbtsxpathfunction

    Other Tools

    On the subject of using the xpath function, if you don't already use it, the below tool is very useful for creating your xpath statements (thanks BizBert): http://www.bizbert.com/bizbert/2007/11/30/XPath+The+Hidden+Language+Of+BizTalk.aspx

    Read the article

  • Use the latest technology or use a mature technology as a developer?

    - by Ted Wong
    I would like to develop an application for a group of people to use. I have decided to develop it using Python, but I am torn between Python 2.X and Python 3.X. If I use Python 2.X, I will need to migrate it in the future, but it is more mature and has many tools and libraries. If I develop using 3.X, I don't need to think about future migration, but currently it doesn't have as many libraries, and even a Python-to-executable tool is not ready for all platforms. Also, one consideration is that this is a brand-new application, so I don't have the historical burden of maintaining old libraries. Any recommendation on this dilemma? More information about this application:

    - Native application
    - Time for maintenance: 5+ years
    - Libraries/tools needed: no idea yet
    - Must-have feature that is in 2.X: converting to an executable for both Windows and Mac OS X
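    For what it's worth, one way to soften the dilemma is to write 2.X code that already behaves like 3.X, so a later migration is mostly mechanical. A minimal sketch using only standard __future__ imports, with nothing project-specific assumed:

        # Runs unchanged on Python 2.6+ and Python 3.x.
        from __future__ import print_function, division, unicode_literals

        def average(values):
            # "true" division even on Python 2: 7 / 2 == 3.5, not 3
            return sum(values) / len(values)

        if __name__ == "__main__":
            # print is a function under both interpreters thanks to the import
            print("average:", average([1, 2, 4]))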

    Read the article

  • How and why are operating systems bootable from a USB?

    - by user114638
    I've been told to install Ubuntu on my laptop for work in order to learn shell scripting. I've read that the best way is to install Ubuntu from a USB stick and partition my HDD. I'm curious how an OS is bootable from a USB stick - is it literally just a small image that can be put anywhere? This reminds me of a time I downloaded a game onto my USB stick; when I brought it to my friend's house, he told me it would run slowly if I only ran it from the USB instead of installing it. Is this different from running Ubuntu from a USB? Will Ubuntu be slow?

    Read the article

  • Intel NUC Video Blur

    - by donopj2
    I recently purchased the D34010WYKH NUC and figured this would be a great time to make the jump to a Linux-based system. I'm running Ubuntu 14.04, and I'm having an issue with video rendering that is driving me mad. Essentially, videos (all 1080p MKV files) appear to blur slowly, and it's most noticeable when the camera remains on a scene for a long period of time. Then all of a sudden the video will correct the blur and the image will be sharp, only for the blurring to begin again, followed by more sudden and noticeable corrections. I have seen the exact same issue in both VLC and XBMC, and across several different videos. I have installed the latest Intel graphics drivers and searched the web, but to be honest I'm not sure how to accurately describe this problem. I'm also quite new to the OS, so my experience tinkering is limited. Has anyone experienced this type of issue before? Can it be resolved?

    Read the article

  • How to Install the MATE Desktop & Go Back to GNOME 2 on Ubuntu

    - by Chris Hoffman
    If you long for the days of GNOME 2 and just can't get along with Unity or GNOME 3, MATE is here to save you. It's an actively developed fork of GNOME 2, and it's easily installable on Ubuntu. MATE isn't available in Ubuntu's repositories, but the MATE developers offer an official repository for Ubuntu. Unlike some methods that recommend you use Linux Mint's repository on Ubuntu, this won't mess up your system.
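    The general shape of the install, as a sketch only - the repository line below is a placeholder, since the real URL and component names come from the MATE project's own instructions, and the metapackage name is an assumption:

        # Add the MATE project's Ubuntu repository (placeholder URL - use the
        # exact line published by the MATE developers for your Ubuntu release).
        sudo add-apt-repository "deb http://repo.example.org/mate/ubuntu precise main"
        sudo apt-get update

        # Install the desktop; the project documents the exact metapackage name.
        sudo apt-get install mate-desktop-environment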

    Read the article

  • Help making TP-Link Wireless USB Adapter TL-WN723N work consistently

    - by Savvas Katseas
    I've only recently made the jump to Ubuntu, using 11.10/64 as my only desktop OS. Everything seems to be working fine except my USB wireless adapter, TP-Link's TL-WN723N, which randomly connects and disconnects. The connection time appears to be random, too: I've experienced hours of connectivity as well as bursts of connections/disconnections. I've tried searching for a solution, but what I find doesn't concern this specific USB adapter. I'd like some help identifying the problem. I've also recently switched to using a D-Link router as a wireless hub, which creates its own wireless-n network. Unfortunately this didn't solve my problems: the new n network can be joined, but there's no connectivity to the internet. I know that's not much info to help others solve my problem, so please let me know what else I can provide to make this a better question - and possibly help others facing similar trouble. lsusb reports that I'm using a Realtek Semiconductor Corp. RTL8188SU 802.11n WLAN Adapter.

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature, called guidance, that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise.

    We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works.

    The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero-tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the ARC case PSARC 2010/312, Link-editor guidance.

    The History Of Unfortunate Link-Editor Defaults

    The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system.

    Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Executable and Linkable Format) object format was adopted. Since then, the ELF system has evolved to provide the tools needed to manage today's larger and more complex environments. Features such as lazy loading and direct bindings have been added.

    In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: for backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort.

    We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue and how to address it. They always break down along the following lines:

    Change ld Defaults

    Since the world would be a better place if the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting.

    New Link-Editor

    One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of existing code in the world built around the existing ld command, reaching back to the 1970's. ld use is embedded in large and unknown numbers of makefiles, and it is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience, no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues.

    An Option To Make ld Do The Right Things Automatically

    This line of reasoning is best summarized by a CR filed in 2005, entitled

        6239804 make it easier for ld(1) to do what's best

    The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction:

    - The -z best proposal assumes that the user can turn it on and trust it to select good options without needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best.

    - The intent is that when a user opts into -z best, they understand that -z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change. When (not if) this occurs, we will of course defend our actions and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed.

    - The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best: that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it.

    I drew two conclusions from the above history:

    - For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces.

    - For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together.

    The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls:

    - -z guidance gives advice, but the decision to take that advice remains with the user, who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience.

    - It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-z guidance2", or other such variants, as things change over time. Guidance has the potential to be our final word on this subject.

    - The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system.

    Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: the user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards.

    ld Command Line Options

    The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature:

    - In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement.
    - Guidance suggestions always offer specific advice, not vague generalizations.

    - You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed.

    - As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that lets you turn it off.

    - In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows:

        % ld -Dargs -z guidance=foo,nodefs
        debug:
        debug: Solaris Linkers: 5.11-1.2275
        debug:
        debug: arg[1]  option=-D:  option-argument: args
        debug: arg[2]  option=-z:  option-argument: guidance=foo,nodefs
        debug:
        warning: unrecognized -z guidance item: foo

    The -z fatal-warnings option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms:

        -z fatal-warnings | nofatal-warnings
        --fatal-warnings | --no-fatal-warnings

    The -z fatal-warnings and --fatal-warnings options cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and --no-fatal-warnings options cause the link-editor to treat warnings as non-fatal. This is the default behavior.

    The -z guidance option is defined as follows:

        -z guidance[=item1,item2,...]

    Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices.

    It is possible to enable guidance while preventing specific guidance messages, by providing a list of item tokens representing the classes of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows.

    Specify Required Dependencies. Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs.

    Do Not Specify Non-Required Dependencies. Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused.

    Lazy Loading. Dependencies should be identified for lazy loading. Guidance recommends the use of the -z lazyload option, should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload.

    Direct Bindings. Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct or -z direct options, should any dependency be processed before either of these options, or the -z nodirect option, is encountered. This guidance can be disabled with -z guidance=nodirect.

    Pure Text Segment. Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC), should any relocations against the text segment remain and neither the -z textwarn nor -z textoff option is encountered. This guidance can be disabled with -z guidance=notext.

    Mapfile Syntax. All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax, should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile.

    Library Search Path. Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath.

    In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects.

    Example

    The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings:

    - Does not specify all its dependencies
    - Specifies dependencies it does not use
    - Does not use direct bindings
    - Uses a version 1 mapfile
    - Contains relocations to the readonly allocable text (not PIC)

    This scenario is sadly very common — many shared objects have one or more of these issues.

        % cat hello.c
        #include <stdio.h>
        #include <unistd.h>

        void
        hello(void)
        {
                printf("hello user %d\n", getpid());
        }

        % cat mapfile.v1
        # This version 1 mapfile will trigger a guidance message

        % cc hello.c -o hello.so -G -M mapfile.v1 -lelf

    As you can see, the operation completes without error, resulting in a usable object. However, turning on guidance reveals a number of things that could be better:

        % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance
        ld: guidance: version 2 mapfile syntax recommended: mapfile.v1
        ld: guidance: -z lazyload option recommended before first dependency
        ld: guidance: -B direct or -z direct option recommended before first dependency
        Undefined                       first referenced
         symbol                             in file
        getpid                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
        printf                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
        ld: warning: symbol referencing errors
        ld: guidance: -z defs option recommended for shared objects
        ld: guidance: removal of unused dependency recommended: libelf.so.1
        warning: Text relocation remains                referenced
            against symbol                  offset      in file
        .rodata1 (section)                  0xa         hello.o
        getpid                              0x4         hello.o
        printf                              0xf         hello.o
        ld: guidance: position independent (PIC) code recommended for shared objects
        ld: guidance: see ld(1) -z guidance for more information

    Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things:

        % cat mapfile.v2
        # This version 2 mapfile will not trigger a guidance message
        $mapfile_version 2

        % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance

    There are situations in which the guidance does not fit the object being built. For instance, you may want to build an object without direct bindings:

        % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance
        ld: guidance: -B direct or -z direct option recommended before first dependency
        ld: guidance: see ld(1) -z guidance for more information

    It is easy to disable that specific guidance warning without losing the overall benefit of allowing the remainder of the guidance feature to operate:

        % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect

    Conclusions

    The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact.

    Deciding to use guidance is likely to cause some up-front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, a way of keeping them that way. The investment is often worth it, and will repay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.

    Read the article

  • How to override the new limited keyboard repeat rate limit?

    - by Olivier Pons
    I may be an alien around here, but here's my problem: the maximum key repeat rate on old Ubuntu releases (before 11) was very, very fast, which was great for me. Now, on Ubuntu 11, they seem to have thought: "Who would ever want that speed? Nobody! So let's cap the maximum speed at a lower limit." It's so frustrating that they narrowed the speed down to match some other famous OS. If Linux is more powerful, why remove some of its power? I don't get it. So is there any way to override that speed limit and get my keyboard as fast as it was on previous versions?
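    A sketch of one way to bypass the GUI slider's ceiling, by setting the X server's autorepeat directly (the delay and rate values below are only examples to adjust to taste):

        # 200 ms initial delay, then 80 repeats per second -
        # values well past what the settings dialog allows.
        xset r rate 200 80

        # To make it survive logins, this line can go in ~/.xprofile
        # (or whatever startup mechanism your session honours).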

    Read the article

  • Packard Bell Easynote NJ65 not saving gamma setup

    - by Pablo Gomez
    Hi. There seems to be an issue with the Easynote range of PB laptops. Since Ubuntu 10.10 does not use an xorg.conf file to save your gamma/resolution/brightness settings, every time I turn on my laptop I have to open a terminal window and use the xgamma command to set it to my personal preference. Is there a way to create a configuration file which saves that into the system every time I load the OS? When I used to have a Compaq Presario (an F564LA with integrated nVIDIA graphics), I could save a config file into the system which loaded everything on startup. For those who don't know the specs of an NJ65 laptop, here they are:

    Processor: Intel® Pentium Dual Core @ 2.2 GHz
    Video: Integrated Intel® GMA 4500MHD graphics
    HDD: 320 GB SATA
    RAM: 2 GB DDR2
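    A minimal sketch of the usual workaround: run xgamma from a session autostart file, so the setting is reapplied at every login (the gamma value is an example; use whatever you currently type by hand):

        # ~/.xprofile - read by most display managers at login
        xgamma -gamma 0.85

        # Or, as a desktop autostart entry (~/.config/autostart/xgamma.desktop):
        # [Desktop Entry]
        # Type=Application
        # Name=Set gamma
        # Exec=xgamma -gamma 0.85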

    Read the article

  • The Complete Window Cameo Collection from the Original Batman Series [Video]

    - by Asian Angel
    Are you a fan of the classic Batman T.V. series? Think you know who all the guest stars were that did window cameos in the series? Then put your knowledge to the test with this fun compilation video by YouTube user loomyaire. You can check your answers (or find out the names of the ones you may have missed) at the links below! The Complete 14 Batman Window Cameos [via BoingBoing]

    Read the article

  • Getting Current Native Thread

    - by Ricardo Peres
    The native OS threads running in the current process are exposed through the Threads property of the Process class. Please note that this is not the same as a managed thread; these are the actual native threads running on the operating system. In order to get the ProcessThread for the currently executing thread, we must use P/Invoke. Here's how we do it:

        [DllImport("kernel32.dll")]
        public static extern UInt32 GetCurrentThreadId();

        UInt32 id = GetCurrentThreadId();
        ProcessThread thread = Process.GetCurrentProcess().Threads
            .Cast<ProcessThread>()
            .Where(t => t.Id == id)
            .Single();
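    Wrapped into a runnable form, under the same assumptions (nothing beyond System.Diagnostics, System.Linq, and the kernel32 call shown above):

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.Runtime.InteropServices;

        class CurrentNativeThread
        {
            [DllImport("kernel32.dll")]
            static extern uint GetCurrentThreadId();

            static void Main()
            {
                uint id = GetCurrentThreadId();

                // Match the native thread ID against the OS threads of this process.
                ProcessThread thread = Process.GetCurrentProcess().Threads
                    .Cast<ProcessThread>()
                    .Single(t => t.Id == id);

                Console.WriteLine("Native thread {0}, started {1}, priority {2}",
                    thread.Id, thread.StartTime, thread.PriorityLevel);
            }
        }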

    Read the article

  • Reinstall Ubuntu keeping my data intact

    - by Magnus
    I have recently upgraded my desktop OS from Ubuntu 12.04 to 12.10 (complete reinstall). Before the switch I made a list of all programs installed on my Ubuntu 12.04:

        sudo dpkg --get-selections > file

    After that I reinstalled with Ubuntu 12.10, and when all was done I performed the following commands:

        sudo dpkg --set-selections < file
        sudo apt-get dselect-upgrade

    Here is where the problems start. I get several warnings like this when performing the commands above:

        dpkg: warning: package not in database at line xxx

    and many of the programs are not installed. I don't know what the warning means. I have searched the web, and it seems that I'm not the only one suffering from this, but I have not found any solution that worked for me. Any ideas what is causing this? Regards, Magnus Örberg

    Read the article

  • What is the version numbering logic for open source developers managing software releases?

    - by Stephen Myall
    I guess this is more of a general question that I can't find the answer to anywhere: what is the version numbering logic used by open source developers managing software releases, and is there any governance or guidance I can read up on? The origin of this question is that I have been reviewing and researching software on countless websites that I would like to use on my Ubuntu OS. Through experience, I am learning that some sites are much better than others at explaining whether a release is a stable, experimental or maintenance release, but these explanations are not consistent with any version numbering logic I am familiar with.

    Read the article

  • Not getting GUI mode on Ubuntu 12.04 desktop edition after installation on VMware Workstation 7

    - by Salil Naik
    I installed Ubuntu 12.04 desktop edition on my PC using VMware Workstation 7, and assigned the virtual machine 1 GB of RAM and a 20 GB hard disk. When starting the Ubuntu virtual machine, it does not start up in GUI mode; it always prompts for my login in text mode, and even after waiting a long time the GUI never appears. I tried running:

        sudo apt-get install updates
        sudo apt-get install xinit
        sudo apt-get install ubuntu-desktop

    Honestly, I don't know the meaning of all these; I am very new to Ubuntu. Please help me here - what should I do? Below is my laptop configuration:

    OS: Genuine Windows 7 Home Basic (64-bit)
    RAM: 3 GB
    Processor: Intel Core i3

    Regards, Salil
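    For what it's worth, a hedged sketch of the usual first steps in this situation ("updates" is not a standard package name, so that first command likely failed; the rest assumes the desktop metapackage simply needs a complete install and the display manager needs starting):

        # Refresh package lists first, then make sure the full desktop stack is present.
        sudo apt-get update
        sudo apt-get install ubuntu-desktop

        # Ubuntu 12.04 uses LightDM as its display manager;
        # starting it should bring up the GUI login screen.
        sudo service lightdm start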

    Read the article
