Search Results

Search found 5757 results on 231 pages for 'refresh policy'.


  • No updates in my Raring

    - by zatloukal-frantisek
    Since upgrading from Quantal to Raring I am not receiving any updates. For example, I have Firefox 17 installed, and apt-get update && apt-get upgrade does not find a newer version. Output from apt-show-versions:

        fanys@fanys-netbook:~$ apt-show-versions firefox
        firefox 17.0+build2-0ubuntu0.12.10.1 newer than version in archive
        fanys@fanys-netbook:~$ apt-show-versions unity
        unity/raring uptodate 6.12.0-0ubuntu1

    I tried removing the contents of /var/lib/apt/lists/ and redoing the package refresh (apt-get update), but the issue remains. Contents of /etc/apt/sources.list:

        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://cz.archive.ubuntu.com/ubuntu/ raring main restricted
        deb-src http://cz.archive.ubuntu.com/ubuntu/ raring main restricted
        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://cz.archive.ubuntu.com/ubuntu/ raring-updates main restricted
        deb-src http://cz.archive.ubuntu.com/ubuntu/ raring-updates main restricted
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://cz.archive.ubuntu.com/ubuntu/ raring universe
        deb-src http://cz.archive.ubuntu.com/ubuntu/ raring universe
        deb http://cz.archive.ubuntu.com/ubuntu/ raring-updates universe
        deb-src http://cz.archive.ubuntu.com/ubuntu/ raring-updates universe
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://cz.archive.ubuntu.com/ubuntu/ raring multiverse
        deb-src http://cz.archive.ubuntu.com/ubuntu/ raring multiverse
        deb http://cz.archive.ubuntu.com/ubuntu/ raring-updates multiverse
        deb-src http://cz.archive.ubuntu.com/ubuntu/ raring-updates multiverse
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://security.ubuntu.com/ubuntu raring-security main restricted
        deb-src http://security.ubuntu.com/ubuntu raring-security main restricted
        deb http://security.ubuntu.com/ubuntu raring-security universe
        deb-src http://security.ubuntu.com/ubuntu raring-security universe
        deb http://security.ubuntu.com/ubuntu raring-security multiverse
        deb-src http://security.ubuntu.com/ubuntu raring-security multiverse
        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        deb http://archive.canonical.com/ubuntu raring partner
        deb-src http://archive.canonical.com/ubuntu raring partner
        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        deb http://extras.ubuntu.com/ubuntu raring main
        deb-src http://extras.ubuntu.com/ubuntu raring main
        deb http://cz.archive.ubuntu.com/ubuntu/ raring-proposed main universe restricted multiverse
        deb http://cz.archive.ubuntu.com/ubuntu/ raring-backports main universe restricted multiverse

    dist-upgrade has found no updates for 4 days. There is one package kept at its current version: libexttextcat-data. Thanks in advance.
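    Editor's note: the troubleshooting steps mentioned above can be collected into a short shell sequence (a sketch, not from the original post; run the removal step with care):

        # Rebuild apt's package metadata from scratch, then check
        # what apt believes is upgradable.
        sudo rm -rf /var/lib/apt/lists/*
        sudo apt-get update
        sudo apt-get dist-upgrade
        apt-show-versions -u   # list only packages with available upgrades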

    Read the article

  • How to Add Control Panel to “My Computer” in Windows 7 or Vista

    - by The Geek
    Back in the Windows XP days, you could easily add Control Panel to My Computer with a simple checkbox in the folder view settings. Windows 7 and Vista don't make this quite as easy, but there's still a way to get it back. To make this tweak, we'll be doing a quick registry hack, but there's a downloadable version provided as well.

    Manual Registry Tweak to Add Control Panel

    Open up regedit.exe through the Start Menu search or run box, and then browse down to the following key:

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace

    Once you're there, right-click and create a new key. If you want the regular Control Panel view, with the categories, use one GUID as the name of the key; if you want the icon view instead, use the other:

        Category View: {26EE0668-A00A-44D7-9371-BEB064C98683}
        Icon View:     {21EC2020-3AEA-1069-A2DD-08002B30309D}

    Once you're done, the new key should appear under NameSpace. Now over in the Computer view, just hit the F5 key to refresh the panel, and you should see the new icon pop up in the list. Clicking the icon takes you to Control Panel. If you didn't know how to change the view before, you can use the drop-down box on the right-hand side to switch between Category and Icon view.

    Downloadable Registry Hack

    Rather than deal with manual registry editing, you can simply download the file, extract it, and then double-click on AddCategoryControlPanel.reg to add the Category view icon, or AddIconControlPanel.reg to add the other icon. There's an uninstall script provided for each. Download the ControlPanelMyComputer Registry Hack from howtogeek.com.
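    Editor's note: the same key can also be created from an elevated command prompt with reg.exe, using the Category view GUID from above (a sketch; swap in the Icon view GUID if you prefer that view):

        REM Create the NameSpace entry that adds Control Panel to My Computer.
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MyComputer\NameSpace\{26EE0668-A00A-44D7-9371-BEB064C98683}" /f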

    Read the article

  • Advice Needed: Developers blocked by waiting on code to merge from another branch using GitFlow

    - by fogwolf
    Our team just made the switch from FogBugz & Kiln/Mercurial to Jira & Stash/Git. We are using the GitFlow model for branching, adding subtask branches off of feature branches (relating to Jira subtasks of Jira features). We use Stash to assign a reviewer when we create a pull request to merge back into the parent branch (usually develop, but for subtasks back into the feature branch).

    The problem we're finding is that even with the best planning and breakdown of feature cases, when multiple developers are working together on the same feature - say on the front end and back end - and their interdependent code lives in separate branches, one developer ends up blocking the other.

    We've tried pulling from each other's branches as we develop. We've also tried creating local integration branches that each developer can use to pull from multiple branches and test the integration as they go. Finally - and this seems to work best for us so far, though with a bit more overhead - we have tried creating an integration branch off of the feature branch right off the bat. When a subtask branch (off of the feature branch) is ready for a pull request and code review, we also manually merge those changesets into this feature integration branch. All interested developers can then pull from the integration branch into other dependent subtask branches, so nobody has to wait on code review of a branch they depend on. (A sketch of this flow appears below.)

    I know this isn't necessarily a Git issue - it has to do with working on interdependent code in multiple branches, mixed with our own work process and culture. If we didn't have the strict code-review policy for develop (our true integration branch), then developer 1 could merge to develop for developer 2 to pull from. Another complication is that we are also required to do some preliminary testing as part of the code review process before handing the feature off to QA. This means that even if front-end developer 1 is pulling directly from back-end developer 2's branch as they go, and back-end developer 2 finishes but his or her pull request sits in code review for a week, then front-end developer 1 technically can't create a pull request/code review, because the reviewer can't test until back-end developer 2's code has been merged into develop.

    The bottom line is that we find ourselves working serially rather than in parallel in these instances, depending on which route we go, and we would like to find a process that avoids this. Last thing I'll mention: we realize that by sharing code across branches that haven't been code reviewed and finalized yet, we are in essence using the beta code of others. To a certain extent I don't think we can avoid that, and we are willing to accept it to a degree. Anyway, any ideas, input, etc. are greatly appreciated. Thanks!
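    Editor's note: the integration-branch approach described above looks roughly like this (a sketch; the feature and subtask branch names are hypothetical):

        # Create a shared integration branch off the feature branch.
        git checkout -b feature/checkout-integration feature/checkout
        git push -u origin feature/checkout-integration

        # When a subtask branch is ready for review, also merge it into
        # the integration branch so other subtasks can build on it.
        git checkout feature/checkout-integration
        git merge feature/checkout-subtask-api
        git push

        # A dependent subtask pulls from the integration branch as needed.
        git checkout feature/checkout-subtask-ui
        git merge feature/checkout-integration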

    Read the article

  • Where to draw the line between development-led security and administration-led security?

    - by haylem
    There are cases where you have the opportunity, as a developer, to enforce stricter security features and protections in a piece of software, even though they could very well be managed at an environmental level (i.e., the operating system would take care of them). Where would you say you draw the line, and what elements do you factor into your decision?

    Concrete Examples

    User management is the OS's responsibility. Not exactly meant as a security feature, but in a similar case, Google Chrome used to not allow separate profiles. The invoked reason (though it now supports multiple profiles for the same OS user) used to be that user management was the operating system's responsibility.

    Disabling web-form fields. A recurrent request I see addressed online is to have auto-completion disabled on form fields. Auto-completion didn't exist in old browsers, and was a welcome feature at the time it was introduced for people who needed to fill in forms often. But it also brought in some security concerns, and so some browsers started to implement, on top of the (obviously needed) setting in their own preference/customization panel, an autocomplete attribute for form or input fields. This has now been introduced into the upcoming HTML5 standard. For browsers that do not listen to this attribute, strange hacks* are offered, like generating unique IDs and names for fields to prevent them from being suggested in future forms (which comes with another herd of issues, like polluting your local auto-fill cache, and not preventing a password from being stored in it but instead probably duplicating its occurrences).

    In this particular case, and others, I'd argue that this is a user setting, and that it's the user's desire and the user's responsibility to enable or disable auto-fill (by disabling the feature altogether). And if it is based on an internal policy and security requirement in a corporate environment, then substitute the administrator for the user in the above. I assume it could be counter-argued that the user may want to access non-critical applications (or sites) with this handy feature enabled, and critical applications with this feature disabled. But then I'd think that's what security zones are for (in some browsers), or a sign that you need a more secure (and dedicated) environment or account to use these applications.

    * I obviously don't deny the ingenuity of the people who were forced to find workarounds, just the necessity of said workarounds.

    Questions

    That was a tad long-winded, so I guess my questions are: Would you in general consider it to be the application's (hence, the developer's) responsibility? Where do you draw the line, if not in the "general" case?

    Read the article

  • What Would a CyberWar Do To Your Business?

    - by Brian Dayton
    In mid-February the Bipartisan Policy Center in the United States hosted Cyber ShockWave, a simulation of how the country might respond to a catastrophic cyber event. An attack takes place, they can't isolate where it came from or who did it, there are simulated press reports and market impacts... and the participants in the exercise have to brief the President and advise him or her on what to do. Last week, former Department of Homeland Security Secretary Michael Chertoff, who participated in the exercise, summarized his findings in Federal Computer Week. The article, given FCW's readership and the topic, is obviously focused on the public sector and US federal policies. However, it touches on some broader issues that impact the private sector as well - applicable to any government and country/region - such as:

    - How would the US (or any) government collaborate to identify and defeat such an attack? Chertoff calls this out as a current gap. How do the public and private sector collaborate today? How would the massive and disparate collection of agencies and companies act together in a crunch?

    - What would the impact on industries and global economies be? Chertoff, and a companion article in Government Computer News, only touch briefly on the subject, focusing on the impact on capital markets. "There's no question this has a disastrous impact on the economy," said Stephen Friedman, former director of the National Economic Council under President George W. Bush, who played the role of treasury secretary. "You have financial markets shut down at this point, ordinary transactions are dramatically depleted, there's no question that this has a major impact on consumer confidence."

    That Got Me Thinking

    - How would it impact Oracle's customers? I know they have business continuity plans - is this one of their scenarios? What if it's not? How would it impact manufacturing lines, ATM networks, customer call centers...

    - How would it impact me and the companies I rely on? The supermarket down the street, my Internet Service Provider, the service station where I bought gas last night.

    I sure don't have any answers, and neither do Chertoff or the participants in the exercise. "I have to tell you that ... we are operating in a bit of unchartered territory," said Jamie Gorelick, a former deputy attorney general who played the role of attorney general in the exercise. But it is a good thing that governments and businesses are considering this scenario and doing what they can to prevent it from happening.

    Read the article

  • Lubuntu: neither shut-down nor restart works

    - by Rantanplan
    I have a freshly installed Lubuntu 14.04.1 (installed with the forcepae option on a laptop with a Pentium M processor). The only problem I have found so far is that I cannot shut down or restart the laptop. It always keeps showing "Lubuntu" and some dots. Pressing Esc, it says:

        wait-for-state stop/waiting
        * Stopping rsync daemon rsync [OK]
        * Asking all remaining processes to terminate… [OK]
        * Killing all remaining processes… [fail]
        ModemManager[597]: <info> Caught signal, shutting down…
        ModemManager[597]: <info> ModemManager is shut down
        nm-dispatcher.action: Could not get the system bus. Make sure the message bus daemon is running! Message: Did not receive a reply. Possible causes: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
        * Deactivating swap… [OK]
        * Will now halt

    The cursor remains blinking, but the only way to switch the laptop off is to hold the power key pressed for some seconds. I tried sudo shutdown -h now, sudo halt and sudo poweroff, all resulting in the same problem. I also tried adding acpi=force to GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" in /etc/default/grub and running sudo update-grub; then, using the taskbar's shut-down button led to an immediate stop of the laptop, equivalent to holding the power key pressed for some seconds. Next I followed the answer at http://askubuntu.com/a/202481/288322. Now I directly receive some messages during shut-down, starting with:

        wait-for-state stop/waiting
        * Stopping rsync daemon rsync [OK]
        * Asking all remaining processes to terminate… [OK]
        [ 240.944277] INFO: task kworker/0:2:24: block for more than 120 seconds.
        [ 240.944461] Tainted: G S 3.13.0-34-generic #60-Ubuntu
        [ 240.944619] "echo 0 > /proc/sys/kernel/hung_tasks_timeout_secs" disables this message.

    followed by some more similar lines and then:

        * Killing all remaining processes… [fail]
        ModemManager[576]: <info> Caught signal, shutting down…
        nm-dispatcher.action: Caught signal 15, shutting down...
        ModemManager[576]: <info> ModemManager is shut down
        * Deactivating swap… [OK]
        * Will now halt
        [ 600.944276] INFO: task kworker/0:2:24: block for more than 120 seconds.
        [ 600.944458] Tainted: G S 3.13.0-34-generic #60-Ubuntu
        [ 600.944619] "echo 0 > /proc/sys/kernel/hung_tasks_timeout_secs" disables this message.

    Then nothing more came during the next 5 minutes. If you know where I can find relevant error information, I will be happy to search for it.
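    Editor's note: on a 14.04 system, hung-task messages like those above are usually also written to the persistent logs, which survive the reboot. A sketch of where to look, assuming the default log locations:

        # Kernel messages from previous boots (rotated files included):
        grep -i "more than 120 seconds" /var/log/kern.log /var/log/kern.log.1
        # General system log from before the reboot:
        less /var/log/syslog.1
        # Messages from the current boot:
        dmesg | tail -n 50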

    Read the article

  • Using commands with ApplicationBarMenuItem and ApplicationBarButton in Windows Phone 7

    - by Laurent Bugnion
    Unfortunately, in the current version of the Windows Phone 7 Silverlight framework, it is not possible to attach a command to the ApplicationBarMenuItem and ApplicationBarButton controls. These two elements appear in the Application Bar, for example with the following markup:

        <phoneNavigation:PhoneApplicationPage.ApplicationBar>
            <shell:ApplicationBar x:Name="MainPageApplicationBar">
                <shell:ApplicationBar.MenuItems>
                    <shell:ApplicationBarMenuItem Text="Add City" />
                    <shell:ApplicationBarMenuItem Text="Add Country" />
                </shell:ApplicationBar.MenuItems>
                <shell:ApplicationBar.Buttons>
                    <shell:ApplicationBarIconButton IconUri="/Resources/appbar.feature.video.rest.png" />
                    <shell:ApplicationBarIconButton IconUri="/Resources/appbar.feature.settings.rest.png" />
                    <shell:ApplicationBarIconButton IconUri="/Resources/appbar.refresh.rest.png" />
                </shell:ApplicationBar.Buttons>
            </shell:ApplicationBar>
        </phoneNavigation:PhoneApplicationPage.ApplicationBar>

    This code creates the UI shown in the original post: the application bar, collapsed and expanded. ApplicationBar items are not, however, controls. A quick look in MSDN shows the class hierarchy for ApplicationBarMenuItem, which does not descend from DependencyObject. Unfortunately, this rules out all the mechanisms that are normally used to attach a command (for example a RelayCommand) to a control. For example, the attached behavior in the class ButtonBaseExtension (from the Silverlight 3 version of the MVVM Light toolkit) can only be attached to a DependencyObject. Similarly, Blend behaviors (such as EventToCommand from the toolkit's Extras library) need a FrameworkElement to work.

    Using code-behind

    The alternative is to use code-behind. As I said in my MIX10 talk, the MVVM police will not take your family away if you use code-behind (this quote was actually suggested to me by Glenn Block); the code-behind is there for a reason. In our case, invoking a command in the ViewModel requires the following code.

    In MainPage.xaml:

        <shell:ApplicationBarMenuItem Text="My Menu 1" Click="ApplicationBarMenuItemClick"/>

    In MainPage.xaml.cs:

        private void ApplicationBarMenuItemClick(object sender, System.EventArgs e)
        {
            var vm = DataContext as MainViewModel;
            if (vm != null)
            {
                vm.MyCommand.Execute(null);
            }
        }

    Conclusion

    Resorting to code-behind to bridge the gap between the View and the ViewModel is less elegant than using attached behaviors, either through an attached property or through a Blend behavior. It does, however, work fine. I don't have any information on whether future changes in the Windows Phone 7 Application Bar API will make this easier. In the meantime, I would recommend using code-behind.

    Laurent Bugnion (GalaSoft)

    Read the article

  • Suggestions for connecting .NET WPF GUI with Java SE Server

    - by Sam Goldberg
    BACKGROUND

    We are building a Java (SE) trading application which will be monitoring market data and sending trade messages based on the market data, and also on user-defined configuration parameters. We plan to provide the user with a thin client, built in .NET (WPF), for managing the parameters, controlling the server behavior, and viewing the current state of the trading. The client doesn't need real-time updates; it will instead update the view once every few seconds (or whatever interval is configured by the user). The client has about 6 different operations it needs to perform with the server, for example:

    - CRUD with configuration parameters
    - query a subset of the data
    - receive updates of current positions from the server

    It is possible that most of the different operations (except for receiving data) are just different flavors of managing the configuration parameters, but it's too early in our analysis for us to be sure. To connect the client with the server, we have been considering:

    - a SOAP web service
    - a RESTful service
    - building a custom TCP/IP based API (text or XML) - least preferred, but we use this approach with other applications we have

    As best as I understand, the pros and cons of the different web service flavors are:

    SOAP - pro: totally automated in .NET (and Java); modifying the server-side interface requires no code changes in the communication layer, just running a refresh on the web service reference to regenerate the classes. Con: more overhead in the communication layer, sending more text, etc. We're not using a J2EE container, so maybe it doesn't work so well with Java SE.

    REST - pro: lighter weight, less data, and it has good .NET and Java support. (I don't have any real experience with this, so I don't know what other benefits it has.) Con: the client will not automatically be aware if any new operations or properties are added (?), so the communication layer needs to be updated by a developer if the server interface changes.

    Con for both approaches: the server cannot really push updates to the client at regular intervals (?). (However, we won't mind if the client polls the server to get updates.)

    QUESTION

    What are your opinions on the above options, or suggestions for other ways to connect the two parts? (Ideally, we don't want to put much work into the communication layer, because it's not the significant part of the application, so the more off-the-shelf and automated the better.)
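    Editor's note: with the polling model described above, the client-side requirements for the REST option are very thin - the endpoint can even be exercised from a shell while prototyping (a sketch; the host, port, and path are hypothetical):

        # Poll the (hypothetical) positions resource every 5 seconds.
        while true; do
            curl -s http://trading-server:8080/api/positions
            sleep 5
        done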

    Read the article

  • Github Organization Repositories, Issues, Multiple Developers, and Forking - Best Workflow Practices

    - by Jim Rubenstein
    A weird title, yes, but I've got a bit of ground to cover, I think. We have an organization account on GitHub with private repositories. We want to use GitHub's native issues/pull-requests features (pull requests are basically exactly what we want as far as code reviews and feature discussions). We found the tool hub by defunkt, which has a cool little feature: it can convert an existing issue to a pull request and automatically associate your current branch with it.

    I'm wondering if it is best practice to have each developer in the organization fork the organization's repository to do their feature work/bug fixes/etc. This seems like a pretty solid workflow (it's basically what every open source project on GitHub does), but we want to be sure that we can track issues and pull requests from ONE source: the organization's repository. So I have a few questions:

    Is a fork-per-developer approach appropriate in this case? It seems like it could be a little overkill. I'm not sure that we need a fork for every developer, unless we introduce developers who don't have direct push access and need all their code reviewed. In that case, we would want to institute a policy like that for those developers only. So, which is better: all developers in a single repository, or a fork for everyone?

    Does anyone have experience with the hub tool, specifically the pull-request feature? If we do a fork-per-developer (or even just for less-privileged devs), will the pull-request feature of hub operate on pull requests against the upstream master repository (the organization's repository), or does it behave differently?

    EDIT: I did some testing with issues, forks, and pull requests, and found the following. If you create an issue on your organization's repository, then fork the repository from your organization to your own GitHub account, make some changes, and merge them to your fork's master branch, then when you try to run hub -i <issue #> you get an error: "User is not authorized to modify the issue". So, apparently that workflow won't work.
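    Editor's note: the failing sequence from the EDIT above looks roughly like this (a sketch; the account, repository, and issue number are hypothetical):

        # Fork mycompany/app to your own account on GitHub, then:
        git clone git@github.com:me/app.git
        cd app
        # ...commit work and merge it to the fork's master...
        hub -i 17   # try to convert org issue #17 into a pull request
        # => "User is not authorized to modify the issue"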

    Read the article

  • Safely deploying changes to production servers

    - by oazabir
    When you deploy incremental changes to a production server, which is running and live all the time, you sometimes see error messages like "Compiler Error Message: The Type 'XXX' exists in both...". Sometimes you find the Application_Start event not firing although you shipped a new class, dll or web.config. Sometimes you find static variables not getting initialized, and so on. So many weird things happen on web servers when you incrementally deploy changes to a server that has been up and running for several weeks. So I came up with a foolproof set of housekeeping steps that we always follow whenever we deploy an incremental change to our websites. These steps ensure that the websites are properly recycled, caches are cleared, and all the data stored at Application level is initialized.

    First of all, you should have multiple web servers behind a load balancer. This way you can take one server out of the production traffic, do your deployment and housekeeping tasks (like restarting IIS), and then put it back. Then you can do the same for the second server, and so on. This ensures there's no outage for customers. If you can do it reasonably fast, hopefully customers won't notice any discrepancy between the servers, some having new code and some having old code. You should only do this when your changes aren't drastic. For example, if you are delivering a completely revamped UI, some users hitting server 1 with the latest UI would suddenly get a completely different experience, and then on the next page refresh they might hit server 2 with the old code and get a totally different experience again. This works for incremental, non-dramatic changes only.

    During deployment you should follow these steps (a scripted sketch of steps 2-9 follows below):

    1. Take server X out of the load balancer so that it does not get any traffic.
    2. Stop all Windows services on the server.
    3. Stop IIS.
    4. Delete the Temporary ASP.NET Files folders of all .NET versions, in case you have multiple .NET versions running. You can follow this link.
    5. Deploy the changes.
    6. Flush any distributed cache you have, for example Velocity or Memcached.
    7. Start IIS.
    8. Start the Windows services on the server.
    9. Warm up all websites by hitting major URLs on them. You should have an automated script to do this. You can use tinyget to hit some major URLs, especially pages that take a lot of time to compile. Read my post on keeping websites warm with zero coding.
    10. Put server X back into the load balancer so that it starts receiving traffic.

    That's it. It should give you a clean deployment and prevent unexpected errors. You should print these steps and hang them on the desk of your deployment guys so that they never forget under deployment pressure.
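    Editor's note: a rough sketch of steps 2-9 as a Windows batch script (the service name, .NET version folder, and warm-up URL are placeholders, and the tinyget flags follow the common -srv/-uri/-loop form; adjust everything to your environment):

        @echo off
        net stop "MyAppService"
        iisreset /stop
        rem Clear Temporary ASP.NET Files (repeat for each installed .NET version).
        rd /s /q "%SystemRoot%\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files"
        rem ...copy the new build and flush the distributed cache here...
        iisreset /start
        net start "MyAppService"
        rem Warm up the site by hitting key URLs.
        tinyget -srv:www.example.com -uri:/ -loop:1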

    Read the article

  • ASP.NET MVC 3 Hosting :: Deploying ASP.NET MVC 3 web application to server where ASP.NET MVC 3 is not installed

    - by mbridge
    You can build a sample ASP.NET MVC 3 application before deploying it to your hosting. To try it out, first put it on a web server where ASP.NET MVC 3 is installed. In this posting I will tell you which files you need and where you can find them, so that you can get the application running on a server where ASP.NET MVC 3 is not installed. One approach is to change the reference to System.Web.Helpers.dll to be a local one, so it is copied to the bin folder of your application. The first file in my list is my web application's dll, which you don't need to get ASP.NET MVC 3 running. All the other files are located in the following folder:

        C:\Program Files\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies\

    If more files are needed in some other scenario, then please leave me a comment here. And... don't forget to convert the folder in IIS to an application.

    While developing an application locally, this isn't a problem. But when you are ready to deploy your application to a hosting provider, it might well be a problem if the hoster does not have the ASP.NET MVC assemblies installed in the GAC. Fortunately, ASP.NET MVC is still bin-deployable. If your hosting provider has ASP.NET 3.5 SP1 installed, then you'll only need to include the MVC DLL. If your hosting provider is still on ASP.NET 3.5, then you'll need to deploy all three. It turns out that it's really easy to do so. Also, ASP.NET MVC runs in Medium Trust, so it should work with most hosting providers' Medium Trust policies. It's always possible that a hosting provider customizes their Medium Trust policy to be draconian.

    Deployment is easy when you know what to include in the archive for publishing your ASP.NET MVC 3 (or later) web site. What I like to do is use the Publish feature of Visual Studio to publish to a local directory and then upload the files to my hosting provider. If your hosting provider supports FTP, you can often skip this intermediate step and publish directly to the FTP site. The first thing I do in preparation is to go to my MVC web application project and expand the References node in the project tree. Select the aforementioned three assemblies and, in the Properties dialog, set Copy Local to True. Now just right-click on your application and select Publish, which brings up the Publish wizard. Notice that in this example I selected a local directory. When I hit Publish, all the files needed to deploy my app are available in the directory I chose, including the assemblies that were in the GAC.

    Other ASP.NET MVC 3 articles: New Features in ASP.NET MVC 3; ASP.NET MVC 3 First Look.

    Read the article

  • Get to Know a Candidate (2 of 25): Merlin Miller - American Third Position Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about "Romney" or "Obama". This is not a post about whom I am voting for. Information sourced from Wikipedia.

    Meet Merlin Miller of the American Third Position Party

    In addition to being the American Third Position Party nominee, Miller is an independent film maker. He is a graduate of West Point and has commanded two units in the United States Army. After military service he worked as an industrial engineering manager for Michelin Tire. He learned about film making by earning an MFA from the University of Southern California. Mr. Miller is on the ballot in CO, NJ, and TN.

    In the 2000s, Miller began taking an increasingly paleoconservative political stance. He claimed that Hollywood "surreptitiously seeks to destroy our European-American heritage and our Christian-based traditional values, and replace them with values that debase these traditional values and elevate minorities as paragons of virtue and wisdom.... Today's motion pictures, in concert with other forms of mass media entertainment, are the greatest enemies to the well-being of our progeny and the future of our country." Miller states that he "doesn't like" interracial marriage; however, he does not support outlawing interracial marriage, either. Miller has denied being anti-Semitic, instead claiming that he merely opposes "favoritism" granted to Jews in the film industry. He also opposes illegal immigration and what he refers to as "wide open borders" in the United States.

    The American Third Position Party (A3P) is a third-positionist American political party which promotes white supremacy. It was officially launched in January 2010 (although in November 2009 it filed papers to get on a ballot in California), partially to channel the right-wing populist resentment engendered by the financial crisis of 2007-2010 and the policies of the Obama administration, and it defines its principal mission as representing the political interests of white Americans. The party takes a strong stand against immigration and globalization, and strongly supports an anti-interventionist foreign policy. Although the party does not support labor unions, it strongly supports the labor rights of the American working class, on a platform of placing American workers ahead of illegal immigrant workers and banning the overseas relocation of American industry and technology.

    Third Position or Third Alternative refers to a revolutionary nationalist political position that emphasizes its opposition to both communism and capitalism. Advocates of Third Position politics typically present themselves as "beyond left and right", instead claiming to syncretize radical ideas from both ends of the political spectrum.

    Learn more about Merlin Miller and the American Third Position Party on Wikipedia.

    Read the article

  • ACORD LOMA 2010: Building Insurance Companies in the Clouds

    - by [email protected]
    Chuck Johnston, vice president of global strategy and alliances for Oracle Insurance, participated in a featured speaking session at ACORD LOMA 2010. He provides an update on his discussions with insurers at the show and after his presentation.

    Every year I make a point of walking the show floor at the ACORD LOMA technology conference to visit with colleagues and competitors, and to try to get a feel for which way the industry will move over the next 12 months. Insurers are looking for substance in cloud (computing), trying to mix business with pleasure (monetizing social networks), and expecting differentiation through commodity (Software as a Service). The disconnect at this show is that most vendors are still struggling with creating a clear path from Facebook to customer intimacy, from SaaS to core cost savings, and from clouds to ubiquitous presence. Vendors need to find new ways to help insurers find the real value in these potentially disruptive technologies, by understanding the changes coming to the insurance business and how these new technologies impact the new insurance business.

    Oracle's approach to understanding the evolving insurance industry comes from a discussion with our customers in our Insurance CIO Council, where one of our customers suggested we buy an insurance company to really understand our customers. We have decided to do the next best thing and build our own model of an insurance company, Alamere Insurance, that uses the latest technologies to transform its own business. Alamere will never issue an actual policy, but it does give us a framework to consider the impacts of changes in the insurance landscape, and how Oracle technology meets the challenge or needs to evolve to help our customers be successful.

    In preparing for my talk at the conference using Alamere as my organizing theme, I found myself reading actuarial memoranda on CSO table changes and articles on underwriting theory that really made me think about my customers' problems first and foremost, and then about how Oracle technology can provide answers. As much as I prefer techno-thrillers and sci-fi novels to actuarial papers for plane reading, I got very excited about the idea of putting myself back in the customer shoes I haven't worn in a decade, and really looking at how Oracle can power the Adaptive Insurance Enterprise. Talking to customers and industry people after the session, the idea of Alamere seemed to excite people, and I got a lot of suggestions as to what lines of business we should model and where we should focus first on technology uptake. One customer said to a colleague that Oracle's attempt to "share their pain" was unique among vendors.

    More about Alamere, and the Adaptive Insurance Enterprise, next time.

    Chuck Johnston is vice president of global strategy and alliances for Oracle Insurance.

    Read the article

  • Spotlight on an office - Dublin!

    - by Tim Koekkoek
    In this third instalment of our monthly topic 'Spotlight on an Office', we visit Dublin, Ireland.

    Oracle has 5 offices in Dublin, all in the EastPoint Business Park close to Dublin city centre. In Dublin there are currently 1,000 people working for Oracle. You'll find, among others, a large part of OracleDirect (our inside sales organization), part of our EMEA Finance organization, and employees from Product and Systems Development who work on the heart of Oracle's products.

    Facilities

    EastPoint Business Park is located next to the Irish Financial Services Centre (IFSC) and is only one train stop away from Dublin city centre. This seafront business park and nearby amenities cater for staff's needs, including a sandwich bar, a coffee shop, and a small convenience store and newsagent. Moreover, there is a physical therapy clinic and beauty salon on site, Pilates and boot camp classes, weekly WeightWatchers classes, five football/tennis courts, and an outdoor chess board.

    When the sun is shining

    On sunny days, comfy, colourful beanbags are spread throughout the park to relax on, and every Wednesday the Irish Village Market provides staff with a variety of delicious gourmet foods from all over the world. Friday afternoons after work are often used by Oracle employees to start the weekend socializing in The Epicenter Cafe Bar & Venue.

    In the office

    In the Oracle offices, you have an open floor design and an open door policy, which makes it really easy to walk over to your colleagues or a manager to discuss your projects and keep informed about what is going on. This way you also have a great chance to bond with your colleagues. In two of the Oracle buildings there are subsidized canteens especially for Oracle employees, with chefs cooking something special every day! One of the best things about Oracle in Dublin is that it is really multinational: currently more than 25 languages are spoken by Oracle employees. You will work with colleagues from all around the globe, every day, which makes it a really interesting and exciting experience.

    Sport & Social

    There is also a dedicated sport and social club, Oraclub, which organizes many sport and social activities. It doesn't matter which sport is your favourite: Oraclub caters for like-minded individuals and makes sure you can play or watch your favourite sport. Furthermore, Oraclub organizes exhibition matches to get you acquainted with other sports; last year the Gaelic Warriors (a wheelchair rugby club) held an exhibition match. Oraclub also offers Oracle parties and language courses, and offers discounts on many events! So whether you want to go to a Robbie Williams concert, an exhibition of Van Gogh, or a match of the Irish rugby team, Oraclub is there for everyone! There are also plenty of opportunities to get involved in volunteering.

    Want to know more about the current vacancies in Dublin? Check https://campus.oracle.com for all of our vacancies.

    Read the article

  • Using Oracle BPM to Extend Oracle Applications

    - by Michelle Kimihira
    Author: Srikant Subramaniam, Senior Principal Product Manager, Oracle Fusion Middleware

    Customers often modify applications to meet their specific business needs - varying regulatory requirements, unique business processes, product mix transitions, etc. Traditional implementation practices for such modifications are typically invasive in nature and introduce risk into projects, affect time-to-market and ease of use, and ultimately increase the costs of running and maintaining the applications. Another downside of these traditional implementation practices is that they literally cast the application in stone, making it difficult for end users to tailor their individual work environments to meet specific needs without getting IT involved. For many businesses, however, IT lacks the capacity to support such rapid business changes. As a result, adopting innovative solutions to change the economics of customization becomes an imperative rather than a choice.

    Let's look at a banking process in Siebel Financial Services and Oracle Policy Automation (OPA) using Oracle Business Process Management. This approach makes modifications simple, quick to implement and easy to maintain/upgrade. The process model is based on the Loan Origination Process Accelerator, i.e., a set of ready-to-deploy business solutions developed by Oracle using Business Process Management (BPM) 11g, containing customizable and extensible pre-built processes to fit specific customer requirements. This use case is a branch-based loan origination process. Origination includes a number of steps, ranging from accepting a loan application, applicant identity and background verification (Know Your Customer), credit assessment and risk evaluation, to the eventual disbursal of funds (or rejection of the application). We use BPM to model all of these individual tasks and integrate (via web services) with Siebel Financial Services and (simulated) backend applications: FLEXCUBE for loan management, Background Verification and Credit Rating.

    The process flow starts in Siebel when a customer applies for a loan, switches to OPA for eligibility verification and product recommendations, before handing off to BPM for approvals. The OPA Connector for Siebel simplifies integration with Siebel's web services framework by saving the results from the self-service interview directly into Siebel. This combination of user input and product recommendation invokes the BPM process for loan origination. At the end of the approval process, we update Siebel and the financial app to complete the loop. We use BPM Process Spaces to display role-specific data via dashboards, including the ability to track the status of a given process (flow trace). Loan underwriters have visibility into the product mix (loan categories), the status of loan applications (count of approved/rejected/pending), the volume and value of loans approved per processing center, processing times, requested vs. approved amounts, and other relevant business metrics.

    Summary

    Oracle recommends the use of Fusion Middleware as an extensions platform for applications. This approach makes modifications simple and quick to implement, and makes applications easy to maintain and upgrade (by moving customizations away from the applications to the process layer). It is also easier to manage processes that span multiple applications by using Oracle BPM.

    Additional Information

    Product Information on Oracle.com: Oracle Fusion Middleware
    Follow us on Twitter and Facebook
    Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Oracle Leader in Transportation Management

    - by John Murphy
    Oracle Named a Leader in the Transportation Management Systems Market by Leading Analyst Firm

    Redwood Shores, Calif. - October 15, 2012

    News Facts

    Gartner, Inc. has placed Oracle Transportation Management in the Leaders Quadrant of its 2012 report, "Magic Quadrant for Transportation Management Systems (TMS)." (1) Gartner Magic Quadrants position vendors within a particular market segment based on their completeness of vision and ability to execute on that vision. According to the report, "Multiple subcomponents make up a comprehensive TMS across planning (for example, load consolidation, routing, mode selection and carrier selection) and execution (for example, tendering loads to carriers, shipment track and trace, and freight audit and payment)."

    Built on a modern, flexible, Internet-based architecture, Oracle Transportation Management is a global transportation and logistics operations system that allows companies to minimize cost, optimize service levels, support sustainability initiatives, and create flexible business process automation within their transportation and logistics networks. With a 26% share of worldwide software revenue for 2011, Oracle is also number one in TMS vendor share according to Gartner's report, "Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012-2016." (2)

    Supporting Quote

    "Shippers and logistics service providers face increasingly complex challenges as they try to reduce costs, secure capacity and improve overall freight efficiency," said Derek Gittoes, vice president, logistics product strategy, Oracle. "We believe our high standing in both Gartner reports is a reflection of Oracle's commitment to addressing these challenges by delivering the industry's broadest and deepest transportation management platform. With a flexible and modern platform, we are able to support customers with both basic transportation needs, as well as those with highly complex logistics requirements."

    Supporting Resources

    Magic Quadrant for Transportation Management Systems
    Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012-2016
    Oracle Transportation Management

    (1) Gartner, Inc., "Magic Quadrant for Transportation Management Systems," by C. Dwight Klappich, August 23, 2012.
    (2) Gartner, Inc., "Market Trends: A Golden Opportunity in the Transportation Management System Market, 2012-2016," by Chad Eschinger and C. Dwight Klappich, September 24, 2012.

    About Oracle Applications

    Over 65,000 customers worldwide rely on Oracle's complete, open and integrated enterprise applications to achieve superior results. Oracle provides a secure path for customers to benefit from the latest technology advances that improve the customer software experience and drive better business performance. Oracle Applications Unlimited is Oracle's commitment to customer choice through continuous investment and innovation in current applications offerings. Oracle's next-generation Fusion Applications build upon that commitment, and are designed to work with and evolve Oracle's Applications Unlimited offerings. Oracle's lifetime support policy helps ensure customers will continue to have a choice in upgrade paths, based on their enterprise needs. For more information on the latest Oracle Applications releases, go to www.oracle.com/applications

    About Oracle

    Oracle engineers hardware and software to work together in the cloud and in your data center. For more information about Oracle (NASDAQ: ORCL), visit www.oracle.com.

    Read the article

  • SQL SERVER – QUOTED_IDENTIFIER ON/OFF Explanation and Example – Question on Real World Usage

    - by Pinal Dave
    This is a follow-up to the blog post SQL SERVER - QUOTED_IDENTIFIER ON/OFF and ANSI_NULL ON/OFF Explanation. I wrote that post six years ago and had planned to write a follow-up. Today, going over my to-do list, I was surprised to find an item there that was six years old and never done. In the earlier blog post I wrote about the behavior of Quoted Identifier and ANSI Null. In this blog post we will see a quick example of Quoted Identifier. Before we continue, let us refresh what QUOTED_IDENTIFIER does.

    QUOTED_IDENTIFIER ON/OFF

    This option specifies the setting for the use of double quotes. When it is ON, a double-quoted string is treated as a SQL Server identifier (object name). This can be useful in situations in which identifiers are also SQL Server reserved words. In simple words, when we have QUOTED_IDENTIFIER ON, anything wrapped in double quotes becomes an object. For example:

        -- The following will work
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE DATABASE "Test1"
        GO

        -- The following will throw an error about incorrect syntax near 'Test2'.
        SET QUOTED_IDENTIFIER OFF
        GO
        CREATE DATABASE "Test2"
        GO

    This feature is particularly helpful when we are working with reserved keywords in SQL Server. For example, if you have to create a database with the name VARCHAR or INT or DATABASE, you may want to put double quotes around the database name and turn on quoted identifiers to create a database with such a name. Personally, I do not think anybody will ever create a database named after a reserved keyword intentionally, as it would just lead to confusion. Here is another example to give you further clarity about how the Quoted Identifier setting works with a SELECT statement:

        -- The following will throw an error about invalid column name 'Column'.
        SET QUOTED_IDENTIFIER ON
        GO
        SELECT "Column"
        GO

        -- The following will work
        SET QUOTED_IDENTIFIER OFF
        GO
        SELECT "Column"
        GO

    Personally, I always use the following method to create a database, as it works irrespective of the quoted identifier's status and always creates objects with the name I want:

        CREATE DATABASE [Test3]

    I believe the quoted identifier setting is useful in the real world when we have a script generated from another database where this setting was ON, and we now have to execute the same script in our own environment.

    Question to you: I personally have never used this feature, as I mentioned earlier. I believe this feature is there to support scripts which are generated in another SQL database, or to generate scripts for another database. Do you have a real-world scenario where we need to turn Quoted Identifiers on or off?

    Click to Download Scripts

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Mexico leading in Business Transformation Strategies:

    - by [email protected]
    By John Burke, Group Vice President, Oracle Applications Business Unit

    I recently completed a business tour in Mexico, and was surprised by both the economic vibrancy of the country and the thought leadership expressed by many of the customers I met. An example of the economic vibrancy of the country: across the street from my hotel was the local Bentley dealership, a Coach store, Yves Saint Laurent and, of course, a Starbucks. I only made it to Starbucks. Both the Coach store and YSL had a line of folks waiting to get in... As for thought leadership, there were several illustrations on the first day alone.

    I had the opportunity to meet with a branch of the Mexican federal government. Their questions were not about clerical task automation - far from it! We discussed citizens' online access to fees and services - for example, looking up the duty on an international goods shipment, tracking that my taxes have been received, or checking the status of my request for a certain service. Eligibility, policies and status: having an integrated rules or policy automation system would allow businesses and citizens to access accurate information and ensure the proper collection of fees and payment for third-party provided services.

    Then in the afternoon I met with the owner of a roofing company (note: most roofs in Mexico are flat and made of cement). This CEO started discussing how he wanted to transform his business from a cement products company to a service company, and market 5-10-15 year service contracts which would guarantee the structural integrity of the roof, and of course that the roof would remain waterproof. Although his products were guaranteed, they required an annual inspection, and most home owners never schedule that inspection until it is too late and water damage has occurred. These emergency calls reduce his margin and reduce customer satisfaction. This led to a discussion of business models in general, and why long-term differentiation can only come from service - not just for the music or news industries, but also for roofing companies!

    I completely agreed with the transformational concepts described in both meetings and quickly understood why there is a Bentley dealership near my hotel.

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works. The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the arc case: PSARC 2010/312 Link-editor guidance The History Of Unfortunate Link-Editor Defaults The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system. Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Extensible Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading, and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: For backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort. 
    We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue and how to address it. They always break down along the following lines:

    Change ld Defaults

    Since the world would be a better place if the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting.

    New Link-Editor

    One might create a new linker command, not called ld, leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of existing code in the world built around the existing ld command, reaching back to the 1970s. ld use is embedded in large and unknown numbers of makefiles, and it is invoked by name by the compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience, no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues.

    An Option To Make ld Do The Right Things Automatically

    This line of reasoning is best summarized by a CR filed in 2005, entitled:

        6239804 make it easier for ld(1) to do what's best

    The idea is to have a -z best option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction:

    The -z best proposal assumes that the user can turn it on and trust it to select good options without needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best.

    The intent is that when a user opts into -z best, they understand that -z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change.
    When (not if) this occurs, we will of course defend our actions and point at the disclaimer. We'll win some of those debates and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed.

    The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the applicable options directly. In doing so, they lose one of the benefits of -z best: that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it.

    I drew two conclusions from the above history:

    For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces.

    For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together.

    The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls:

    -z guidance gives advice, but the decision to take that advice remains with the user, who must evaluate its merit and decide whether or not to act on it. As such, we are free to change the specific guidance given in future releases of ld without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience.

    It does not couple the various features into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants, as things change over time. Guidance has the potential to be our final word on this subject.

    The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system.

    Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice into a hard requirement: the user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards.

    ld Command Line Options

    The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature:

    In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement.
    Guidance suggestions always offer specific advice, not vague generalizations.

    You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for the things that are useful to you, without being bothered about things that you've already considered and dismissed.

    As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that lets you turn it off.

    In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see which guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows:

        % ld -Dargs -z guidance=foo,nodefs
        debug:
        debug: Solaris Linkers: 5.11-1.2275
        debug:
        debug: arg[1]  option=-D:  option-argument: args
        debug: arg[2]  option=-z:  option-argument: guidance=foo,nodefs
        debug: warning: unrecognized -z guidance item: foo

    The -z fatal-warnings option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms:

        -z fatal-warnings | nofatal-warnings
        --fatal-warnings | --no-fatal-warnings

    The -z fatal-warnings and --fatal-warnings options cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and --no-fatal-warnings options cause the link-editor to treat warnings as non-fatal. This is the default behavior.

    The -z guidance option is defined as follows:

        -z guidance[=item1,item2,...]

    Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices.

    It is possible to enable guidance while preventing specific guidance messages, by providing a list of item tokens representing the classes of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris.

    The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows:

    Specify Required Dependencies
    Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs.

    Do Not Specify Non-Required Dependencies
    Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused.

    Lazy Loading
    Dependencies should be identified for lazy loading.
    Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload.

    Direct Bindings
    Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct or -z direct options should any dependency be processed before either of these options, or the -z nodirect option, is encountered. This guidance can be disabled with -z guidance=nodirect.

    Pure Text Segment
    Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn nor -z textoff options are encountered. This guidance can be disabled with -z guidance=notext.

    Mapfile Syntax
    All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile.

    Library Search Path
    Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath.

    In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects.

    Example

    The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings:

    - Does not specify all of its dependencies
    - Specifies dependencies it does not use
    - Does not use direct bindings
    - Uses a version 1 mapfile
    - Contains relocations to the read-only allocable text (not PIC)

    This scenario is sadly very common: many shared objects have one or more of these issues.

        % cat hello.c
        #include <stdio.h>
        #include <unistd.h>

        void
        hello(void)
        {
                printf("hello user %d\n", getpid());
        }

        % cat mapfile.v1
        # This version 1 mapfile will trigger a guidance message

        % cc hello.c -o hello.so -G -M mapfile.v1 -lelf

    As you can see, the operation completes without error, resulting in a usable object.
    However, turning on guidance reveals a number of things that could be better:

        % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance
        ld: guidance: version 2 mapfile syntax recommended: mapfile.v1
        ld: guidance: -z lazyload option recommended before first dependency
        ld: guidance: -B direct or -z direct option recommended before first dependency
        Undefined                       first referenced
         symbol                             in file
        getpid                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
        printf                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
        ld: warning: symbol referencing errors
        ld: guidance: -z defs option recommended for shared objects
        ld: guidance: removal of unused dependency recommended: libelf.so.1
        warning: Text relocation remains        referenced
            against symbol                      offset  in file
        .rodata1 (section)                      0xa     hello.o
        getpid                                  0x4     hello.o
        printf                                  0xf     hello.o
        ld: guidance: position independent (PIC) code recommended for shared objects
        ld: guidance: see ld(1) -z guidance for more information

    Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things:

        % cat mapfile.v2
        # This version 2 mapfile will not trigger a guidance message
        $mapfile_version 2

        % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance

    There are situations in which the guidance does not fit the object being built. For instance, suppose you want to build an object without direct bindings:

        % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance
        ld: guidance: -B direct or -z direct option recommended before first dependency
        ld: guidance: see ld(1) -z guidance for more information

    It is easy to disable that specific guidance warning without losing the overall benefit of allowing the remainder of the guidance feature to operate:

        % cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect

    Conclusions

    The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact.

    Deciding to use guidance is likely to cause some up-front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects and, once that work is done, for keeping them that way. The investment is often worth it, and will repay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.
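    For a build system, the natural place to adopt all of this is the link line of a makefile. Here is a minimal sketch built around the example object above; the variable names and target are illustrative, not from any shipped makefile, and only the -z options are those described in this article:

        # Opt the link step into guidance and promote any remaining
        # warnings to hard errors, so new issues cannot creep in.
        # (In a real makefile the recipe line must be tab-indented.)
        CC      = cc
        LDFLAGS = -G -Bdirect -zguidance -zfatal-warnings

        hello.so: hello.c mapfile.v2
                $(CC) -Kpic hello.c -o hello.so $(LDFLAGS) -M mapfile.v2 -lc

    If a particular suggestion has been considered and rejected, the exception token goes in the same place, for example LDFLAGS = -G -zguidance=nodirect, which documents that decision right in the build rule.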

    Read the article

  • Access-based Enumeration (December 04, 2009)

    - by user12612012
    Access-based Enumeration (ABE) is another recent addition to the Solaris CIFS Service, delivered into snv_124. Designed to be compatible with Windows ABE, which was introduced in Windows Server 2003 SP1, this feature filters directory content based on the user browsing the directory. Each user can only see the files and directories to which they have access. This can be useful to implement an out-of-sight, out-of-mind policy, or simply to reduce the number of entries presented to each user, making it easier to find files in directories containing a large number of files.

    ABE is managed on a per-share basis by a new boolean share property called, as you might imagine, abe, which is described in sharemgr(1M). When set to true, ABE filtering is enabled on the share, and directory entries to which the user has no access will be omitted from directory listings returned to the client. When set to false or not defined, ABE filtering will not be performed on the share. The abe property is not defined by default.

    Administration is straightforward, for example:

        # zfs set sharesmb=abe=true,name=jane tank/home/jane
        # sharemgr show -vp
        zfs
            zfs/tank/home/jane nfs=() smb=()
                  jane=/export/home/jane     smb=(abe="true")

    ABE is also supported via sharemgr(1M) and on smbautohome(4) shares.

    Note that even though a file is visible in a share with ABE enabled, it doesn't automatically mean that the user will always be able to open the file. If a user has read attribute access to a file, ABE will show the file, but access will be denied if the user tries to open it for reading or writing.

    We considered supporting ABE on NFS shares, as suggested by the name of PSARC/2009/375, but we ran into problems due to NFS client readdir caching. NFS clients maintain a common directory entry cache for all users, which not only defeats the intent of ABE but can lead to very confusing results. If multiple users are looking at the content of a directory with ABE enabled, the entries that get cached will depend on who looks at the directory first. Subsequent users may see files that ABE on the server would have filtered out, or files may be missing because they were filtered out for the original user.

    Although this issue can be resolved by disabling the NFS client readdir cache, this was deemed to be an unsuitable solution because it would create a dependency between a server share property and the configuration on all NFS clients, and there was the potential for differences in behavior across the various NFS clients. It just seemed to add unnecessary administration complexity, so we pulled it out.

    References for more information:
    PSARC/2009/246 ZFS support for Access Based Enumeration
    PSARC/2009/375 ABE share property for NFS and SMB
    6802734 Support for Access Based Enumeration
    6802736 SMB share support for Access Based Enumeration
    Windows Access-based Enumeration
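    Since abe is a boolean share property, turning the filtering back off follows the same pattern. A one-line sketch, assuming the same jane share as above; the abe=false value mirrors the enable command and simply reverts to the default unfiltered behavior:

        # zfs set sharesmb=abe=false,name=jane tank/home/jane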

    Read the article

  • Microsoft JScript runtime error in Visual Studio 2010

    - by anirudha
    There are many tools for debugging JavaScript: Visual Studio, Firebug, and several other great plugins among them. Here I show you a solution for the error "Microsoft JScript runtime error: Object doesn't support this property or method".

    If you search Google for how to debug JavaScript in Visual Studio, most results tell you to follow these instructions: go to Internet Options in IE > Advanced tab > Browsing section > uncheck both options "Disable script debugging (Internet Explorer)" and "Disable script debugging (Other)". That information is outdated and no longer true these days. Current versions of Visual Studio can debug JavaScript whether those options are checked or not, and you can try these steps in the Express editions of Visual Studio as well.

    I found a little problem: my code, based on a jQuery plugin, tries to call certain functions. Some of them are not implemented in a given browser, so the code falls back to others; that is fine and works in every browser. But when Visual Studio, debugging with IE, encounters a call to a function that is not implemented in the browser, it reports "Microsoft JScript runtime error: Object doesn't support this property or method", and it reports it again every time I refresh (F5) the page. So you see the error window first, every time, before the page you actually want to see, even when the code is not buggy. This behavior is harsh. It is no problem if you don't commonly use IE, but when you really want to debug in it, this gets painful.

    There is a workaround if you really like debugging in Visual Studio with IE, you are sure the code is not buggy, and you don't want to see that window. When you debug a project or website, Visual Studio launches IE automatically. Open the Developer Tools by pressing F12, and you will see the Browser Mode and Document Mode menus (by default the Document Mode is IE7, or the Browser Mode is based on your settings). Set the Browser Mode to one of the compatibility views (any of them), and the next time you debug, the error window reporting "Microsoft JScript runtime error: Object doesn't support this property or method" will not appear again.
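    Another way to avoid triggering the error in the first place is to feature-test before calling anything the current browser might not implement, which is what well-behaved jQuery plugins do internally. A minimal sketch of the pattern; the element methods shown (addEventListener, attachEvent) are just illustrative:

        // Only call a method if the browser actually provides it; this
        // guard prevents "Object doesn't support this property or
        // method" in IE.
        function addClickHandler(el, handler) {
            if (typeof el.addEventListener === "function") {
                el.addEventListener("click", handler, false); // standards-based browsers
            } else if (typeof el.attachEvent !== "undefined") {
                el.attachEvent("onclick", handler);           // older versions of IE
            }
        }

    Note the looser typeof test for attachEvent: older IE reports DOM methods as "object" rather than "function", so comparing against "function" there would skip a method that actually exists.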

    Read the article

  • BizTalk 2009 - Custom Functoid Categories

    - by StuartBrierley
    I recently had cause to code a number of custom functoids to aid with some maps that I was writing. Once these were developed and deployed to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions, a quick refresh allowed them to appear in the toolbox. After dropping these on a map and configuring the appropriate inputs, I tested the map to check that they worked as expected. All but one of the functoids worked as expected, but the final functoid appeared not to be firing at all.

    I had already tested the code used in a simple test harness application, so I was confident in the code, but I still needed to figure out what the problem might be. Debugging the map helped me on the way: for some reason the functoid in question was not shown correctly; the functoid definition was wrong.

    After some investigation I found that the functoid type you assign when coding a custom functoid affects more than just the category it appears in; different functoid types have different capabilities, including what they can link to. For example, a Logical functoid cannot provide content for an output element; it can only say whether the element exists. Map this via a Value Mapping functoid and the value of true or false can be seen in the output element.

    The functoid I was having problems with was one where I had used the XPath functoid type. This had seemed a good fit, as I was looking up content in a config file using xpath and I wanted the functoid to appear in the Advanced area. From the table below you can see that this functoid type is marked as "Internal Use Only", preventing it from being used for custom functoids. Changing my type to String allowed the functoid to function as expected (a sketch of where this type is assigned in code follows the links below).

        Category          Toolbox Group   Description
        ----------------  --------------  ------------------------------------------------------
        Assert            Advanced        Internal Use Only
        Conversion        Conversion      Converts characters to and from numerics and converts
                                          numbers from one base to another.
        Count             Advanced        Internal Use Only
        Cumulative        Cumulative      Performs accumulations of the value of a field that
                                          occurs multiple times in a source document and outputs
                                          a single output.
        DatabaseExtract   Database        Internal Use Only
        DatabaseLookup    Database        Internal Use Only
        DateTime          Date/Time       Adds date, time, date and time, or days to a specified
                                          date, in output data.
        ExistenceLooping  Advanced        Internal Use Only
        Index             Advanced        Internal Use Only
        Iteration         Advanced        Internal Use Only
        Keymatch          Advanced        Internal Use Only
        Logical           Logical         Controls conditional behavior of other functoids to
                                          determine whether particular output data is created.
        Looping           Advanced        Internal Use Only
        MassCopy          Advanced        Internal Use Only
        Math              Mathematical    Performs specific numeric calculations such as
                                          addition, multiplication, and division.
        NilValue          Advanced        Internal Use Only
        Scientific        Scientific      Performs specific scientific calculations such as
                                          logarithmic, exponential, and trigonometric functions.
        Scripter          Advanced        Internal Use Only
        String            String          Manipulates data strings by using well-known string
                                          functions such as concatenation, length, find, and
                                          trim.
        TableExtractor    Advanced        Internal Use Only
        TableLooping      Advanced        Internal Use Only
        Unknown           Advanced        Internal Use Only
        ValueMapping      Advanced        Internal Use Only
        XPath             Advanced        Internal Use Only

    Links
    http://msdn.microsoft.com/en-us/library/microsoft.biztalk.basefunctoids.functoidcategory(BTS.20).aspx
    http://blog.eliasen.dk/CommentView,guid,d33b686b-b059-4381-a0e7-1c56e808f7f0.aspx
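    For reference, the functoid type is assigned in the constructor of the custom functoid class. The following is a minimal sketch, assuming the BaseFunctoid base class from the Microsoft.BizTalk.BaseFunctoids namespace; the ID, class name, resource identifiers, and lookup method are hypothetical placeholders rather than the code I actually deployed:

        using System.Reflection;
        using Microsoft.BizTalk.BaseFunctoids;

        public class ConfigLookupFunctoid : BaseFunctoid
        {
            public ConfigLookupFunctoid() : base()
            {
                this.ID = 6001; // custom functoid IDs live above 6000

                // Hypothetical resource assembly and identifiers for the
                // name, tooltip, description, and toolbox bitmap.
                SetupResourceAssembly("MyFunctoids.FunctoidResources",
                    Assembly.GetExecutingAssembly());
                SetName("IDS_CONFIGLOOKUP_NAME");
                SetTooltip("IDS_CONFIGLOOKUP_TOOLTIP");
                SetDescription("IDS_CONFIGLOOKUP_DESC");
                SetBitmap("IDB_CONFIGLOOKUP_BITMAP");

                // String rather than XPath: per the table above, XPath is
                // internal use only, so a custom functoid declared with it
                // will not fire in a deployed map.
                this.Category = FunctoidCategory.String;

                SetMinParams(1);
                SetMaxParams(1);
                SetExternalFunctionName(GetType().Assembly.FullName,
                    "MyFunctoids.ConfigLookupFunctoid", "LookupValue");
                AddInputConnectionType(ConnectionType.AllExceptRecord);
                OutputConnectionType = ConnectionType.AllExceptRecord;
            }

            // Invoked by the map at run time; the xpath-driven config file
            // lookup described above would live here.
            public string LookupValue(string key)
            {
                return key; // placeholder
            }
        }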

    Read the article

  • Certification Notes: 70-583 Designing and Developing Windows Azure Applications

    - by BuckWoody
    It's time for another certification, and we've just released the 70-583 exam on Windows Azure. I've blogged my "study plans" here before for other certifications, so I thought I would do the same for this one. I'll also need to take exams 70-513 and 70-516, but I'll post my notes on those separately. None of these are "brain dumps" or questions from the actual tests, just the books, links and notes I have from my studies. I'll update these references as I'm studying, so bookmark this site and watch my Twitter and Facebook posts for when I update them, or just subscribe to the RSS feed. A green check-block means I've done that part so far; red means I haven't.

    First, I need to refresh my memory on some basic coding, so along with the Azure-specific information I'm reading the following general programming books:

    [ ] Introducing Microsoft .NET (Pro-Developer): http://www.amazon.com/Introducing-Microsoft-Pro-Developer-David-Platt/dp/0735619182/ref=sr_1_1?s=books&ie=UTF8&qid=1296339237&sr=1-1
    [ ] Head First C#, 2E: A Learner's Guide to Real-World Programming with Visual C# and .NET: http://www.amazon.com/Head-First-2E-Real-World-Programming/dp/1449380344/ref=sr_1_1?ie=UTF8&qid=1296339176&sr=8-1
    [ ] Microsoft Visual C# 2008 Step by Step: http://www.amazon.com/Microsoft-Visual-2008-Step/dp/0735624305/ref=sr_1_1?s=books&ie=UTF8&qid=1296339208&sr=1-1

    [ ] The first place to start is at the official site for the certification: http://www.microsoft.com/learning/en/us/Exam.aspx?ID=70-583&Locale=en-us

    [ ] On that page you'll find several resources, and the first you should follow is "Save to my learning" so you have a place to track everything. Then click the "Related Learning Plans" link and follow the videos and read the documentation in each of those bullets. There are six areas on the learning plan that you should focus on; make sure you open the learning plan to drill into the specifics.

    [ ] Designing Data Storage Architecture (18%)
        Books I'm Reading:
        Links:
        My Notes:
    [ ] Optimizing Data Access and Messaging (17%)
        Books I'm Reading:
        Links:
        My Notes:
    [ ] Designing the Application Architecture (19%)
        Books I'm Reading:
        Links:
        My Notes:
    [ ] Preparing for Application and Service Deployment (15%)
        Books I'm Reading:
        Links:
        My Notes:
    [ ] Investigating and Analyzing Applications (16%)
        Books I'm Reading:
        Links:
        My Notes:
    [ ] Designing Integrated Solutions (15%)
        Books I'm Reading:
        Links:
        My Notes:

    Read the article

  • Preview Links and Images in Google Chrome

    - by Asian Angel
    Anyone who has used the CoolPreviews extension in Firefox knows how wonderful that preview window can be. Now you can get the same kind of functionality in Chrome with the ezLinkPreview extension. Note: the extension will not work on websites containing "frame buster" code (navigation to the actual URL will occur instead).

    Before
    Normally, if you want a better look at a particular webpage, the only option you have is to go ahead and open it in a new tab or window. But it would certainly be nice to be able to take a quick sneak peek beforehand.

    After
    As soon as you have finished installing the extension everything is ready to go; just refresh any pages opened prior to installation and enjoy the preview goodness. When you hover your mouse near any link you will notice a small Preview Button appear with the letters "EZ" inside. Click on the Preview Button to open the popup window. Now you can get a very good idea of whether the page is worth visiting or not.

    In the popup window you can see the URL for the webpage and access a convenient set of buttons on the right side (Open in new tab, Pin to keep the overlay open, and Close). You can resize the window as desired to best suit your needs (you can grab any of the four corners of the popup window). It is even possible to open a preview window inside the popup window, and if you have Chrome maximized you can enjoy a large preview window. Now that is nice! For those who may be curious, ezLinkPreview works nicely with images too.

    Conclusion
    The ezLinkPreview extension provides a quick and simple way to preview links and/or images while you are browsing. If you are looking for similar functionality in Firefox, be sure to read our article on CoolPreviews.

    Links
    Download the ezLinkPreview extension (Google Chrome Extensions)

    Read the article

< Previous Page | 160 161 162 163 164 165 166 167 168 169 170 171  | Next Page >