Search Results

Search found 23603 results on 945 pages for 'non technical manager'.


  • Microsoft offers a free trial version of Project 2010 and videos introducing its project management tool

Update of 11/06/10: Microsoft is offering a free trial version of Project 2010, along with videos introducing its project management tool. Project is certainly one of Microsoft's lesser-known products, yet this specialized project management application does not lack qualities. In many cases it can save precious time for developers who want to develop rather than spend their days organizing or planning tasks for others. To remedy Project's relative anonymity, Microsoft has decided to offer

    Read the article

  • Connect to running web role on Azure using Remote Desktop Connection and VS2012

    - by Magnus Karlsson
We want to be able to collect IntelliTrace information from our running app and also use Remote Desktop to connect to the IIS machine and look around (probably for debugging).

1. Create certificate
1.1 Right-click the cloud project (marked in red) and select “Configure remote desktop”.
1.2 In the drop-down list of certificates, choose <create> at the bottom.
1.3 Follow the instructions; you can set it up with default values.
1.4 When done, choose the certificate and click “Copy to File…” as seen in the left of the picture above.
1.5 Save the file with any name you want. Next we will add it to the local certificate store, to be able to import it into our solution through the Azure configuration manager in step 3.

2. Save certificate to local storage
Now we need to add the certificate to our local certificate store to be able to reach it from the configuration manager in Visual Studio. Microsoft provides the following steps for doing this: http://support.microsoft.com/kb/232137

To view the Certificates store on the local computer, perform the following steps: Click Start, and then click Run. Type "MMC.EXE" (without the quotation marks) and click OK. Click Console in the new MMC you created, and then click Add/Remove Snap-in. In the new window, click Add. Highlight the Certificates snap-in, and then click Add. Choose the Computer option and click Next. Select Local Computer on the next screen, and then click OK. Click Close, and then click OK. You have now added the Certificates snap-in, which will allow you to work with any certificates in your computer's certificate store. You may want to save this MMC for later use.

Now that you have access to the Certificates snap-in, you can import the server certificate into your computer's certificate store by following these steps: Open the Certificates (Local Computer) snap-in and navigate to Personal, and then Certificates. Note: Certificates may not be listed; if they are not, that is because there are no certificates installed. Right-click Certificates (or Personal if that option does not exist). Choose All Tasks, and then click Import. When the wizard starts, click Next. Browse to the PFX file you created containing your server certificate and private key. Click Next. Enter the password you gave the PFX file when you created it. Be sure the "Mark the key as exportable" option is selected if you want to be able to export the key pair again from this computer. As an added security measure, you may want to leave this option unchecked to ensure that no one can make a backup of your private key. Click Next, and then choose the certificate store you want to save the certificate to. You should select Personal because it is a web server certificate. If you included the certificates in the certification hierarchy, they will also be added to this store. Click Next. You should see a summary screen showing what the wizard is about to do. If this information is correct, click Finish. You will now see the server certificate for your web server in the list of Personal certificates. It will be denoted by the common name of the server (found in the subject section of the certificate).

Now that you have the certificate imported into the certificate store, you can enable Internet Information Services 5.0 to use that certificate (and the corresponding private key). To do this, perform the following steps: Open the Internet Services Manager (under Administrative Tools) and navigate to the web site you want to enable secure communications (SSL/TLS) on. Right-click the site and click Properties. You should now see the properties screen for the web site. Click the Directory Security tab. Under the Secure Communications section, click Server Certificate. This starts the Web Site Certificate Wizard. Click Next. Choose the "Assign an existing certificate" option and click Next. You will now see a screen showing the contents of your computer's personal certificate store. Highlight your web server certificate (denoted by the common name), and then click Next. You will now see a summary screen showing all the details about the certificate you are installing. Be sure that this information is correct or you may have problems using SSL or TLS in HTTP communications. Click Next, and then click OK to exit the wizard. You should now have an SSL/TLS-enabled web server. Be sure to protect your PFX files from any unwanted personnel. Image of a typical MMC.EXE with the certificates up.

3. Import the certificate into your Visual Studio project
3.1 Right-click your equivalent of MvcWebRole1 (as seen in the first picture under the red oval) and choose Properties.
3.2 Choose Certificates. Click the ellipsis to the right of the “Thumbprint” field and you should be able to select your newly created certificate there. After selecting it, save the file.

4. Upload the certificate to your Azure subscription
4.1 Go to the Azure management portal, click the services menu icon to the left and choose the service. Click Upload in the bottom menu.

5. Connect to the server
Since I first tried to use my account settings (you have to use another name), we have to set up a new name for the connection. No biggie.
5.1 Go to the Azure management portal, select your service and, in the bottom menu, choose “REMOTE”. This displays the configuration for the remote connection. It will actually change your ServiceConfiguration.cscfg file; after you change it here, it is a good idea to choose Download and replace the one in your project. Set a name that is neither your Windows Azure account name nor Administrator.
5.2 Go to Visual Studio, click Server Explorer, choose as selected in the picture below, and click “Connect using Remote Desktop”.
5.3 You will now be able to log in with the name and password set up in step 5.1, and voilà! Windows Server 2012, IIS and other nice stuff!

For this one I've been using http://msdn.microsoft.com/en-us/library/windowsazure/ff683671.aspx, where you can find some of this information and more.
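As an aside (not from the original post), the MMC import in step 2 can also be done from an elevated command prompt with certutil, which ships with Windows; the file name and password below are placeholders:

    REM Import the PFX from step 1 into the Local Machine / Personal store
    REM (replaces the MMC wizard walk-through above). Placeholder values.
    certutil -f -p MyPfxPassword -importpfx remotedesktop.pfx

    REM List the store to confirm the import and note the thumbprint,
    REM which is what you select in step 3.
    certutil -store My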

    Read the article

  • WCF Data Service Pipeline

    - by Daniel Cazzulino
For documentation purposes, I just drew the following UML sequence diagrams for the “Astoria” pipeline, using Visual Studio 2010 Ultimate. For a single-entity (or non-batched) request, there is one sequence; for a batch request, there is a different sequence instead. The DataService component is your own DataService<T>-derived class, and DataService.ProcessingPipeline refers to the pipeline events exposed by its ProcessingPipeline property.   /kzu

    Read the article

  • Giving a Zone "More Power"

    - by Brian Leonard
In addition to the traditional virtualization benefits that Solaris zones offer, applications running in zones are also running in a more secure environment. One way to quantify this is to compare the privileges available to the global zone with those of a local zone. For example, there are 82 distinct privileges available to the global zone:

bleonard@solaris:~$ ppriv -l | wc -l
82

You can view the descriptions for each of those privileges as follows:

bleonard@solaris:~$ ppriv -lv
contract_event
        Allows a process to request critical events without limitation.
        Allows a process to request reliable delivery of all events on
        any event queue.
contract_identity
        Allows a process to set the service FMRI value of a process
        contract template.
...

Or for just one or more privileges:

bleonard@solaris:~$ ppriv -lv file_dac_read file_dac_write
file_dac_read
        Allows a process to read a file or directory whose permission
        bits or ACL do not allow the process read permission.
file_dac_write
        Allows a process to write a file or directory whose permission
        bits or ACL do not allow the process write permission.
        In order to write files owned by uid 0 in the absence of an
        effective uid of 0 ALL privileges are required.

However, in a non-global zone, only 43 of the 82 privileges are available by default:

root@myzone:~# ppriv -l zone | wc -l
43

The missing privileges are: cpc_cpu dtrace_kernel dtrace_proc dtrace_user file_downgrade_sl file_flag_set file_upgrade_sl graphics_access graphics_map net_mac_implicit proc_clock_highres proc_priocntl proc_zone sys_config sys_devices sys_ipc_config sys_linkdir sys_dl_config sys_net_config sys_res_bind sys_res_config sys_smb sys_suser_compat sys_time sys_trans_label virt_manage win_colormap win_config win_dac_read win_dac_write win_devices win_dga win_downgrade_sl win_fontpath win_mac_read win_mac_write win_selection win_upgrade_sl xvm_control

However, just like Tim Taylor, it is possible to give your zones more power. For example, a zone by default doesn't have the privileges to support DTrace:

root@myzone:~# dtrace -l
ID   PROVIDER            MODULE                          FUNCTION NAME

The DTrace privileges can be added, however, as follows:

bleonard@solaris:~$ sudo zonecfg -z myzone
Password:
zonecfg:myzone> set limitpriv="default,dtrace_proc,dtrace_user"
zonecfg:myzone> verify
zonecfg:myzone> exit
bleonard@solaris:~$ sudo zoneadm -z myzone reboot

Now I can run DTrace from within the zone:

root@myzone:~# dtrace -l | more
   ID   PROVIDER            MODULE                          FUNCTION NAME
    1     dtrace                                                     BEGIN
    2     dtrace                                                     END
    3     dtrace                                                     ERROR
 7115    syscall                                               nosys entry
 7116    syscall                                               nosys return
...

Note, certain privileges are never allowed to be assigned to a zone. You'll be notified on boot if you attempt to assign a prohibited privilege to a zone:

bleonard@solaris:~$ sudo zoneadm -z myzone reboot
privilege "dtrace_kernel" is not permitted within the zone's privilege set
zoneadm: zone myzone failed to verify

Here's a nice listing of all the privileges and their zone status (default, optional, prohibited): Privileges in a Non-Global Zone.
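A quick way to list exactly which privileges a non-global zone is missing, instead of comparing the two listings above by hand, is to diff the outputs; a small sketch, assuming a bash (or ksh93) shell for process substitution:

    # From the global zone: privileges in the full set (ppriv -l) that are
    # absent from the default non-global zone set (ppriv -l zone).
    comm -23 <(ppriv -l | sort) <(ppriv -l zone | sort)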

    Read the article

  • How to change the file manager's "Files" icon to the nicer "Nautilus shell" icon in Ubuntu 12.04?

    - by Ivan Ivanov
I've attached Nautilus to the Unity launcher in Ubuntu 12.04. The default "Files" icon for the Nautilus file manager is not as pretty as the "Nautilus shell" icon. When I search the Dash for Nautilus, it gives a "Files" link to Nautilus with a grey and quite plain file-cabinet icon. How do I change this grey file-cabinet icon to the Nautilus shell icon, which suits my icon theme well?
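Not part of the question, but one common approach is to override the .desktop file in your home directory so the Dash and launcher pick up a different icon; a sketch, assuming your icon theme ships an icon named "nautilus":

    # Copy the system launcher and point its Icon= line at another icon;
    # the local copy overrides /usr/share/applications for this user.
    cp /usr/share/applications/nautilus.desktop ~/.local/share/applications/
    sed -i 's/^Icon=.*/Icon=nautilus/' ~/.local/share/applications/nautilus.desktop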

    Read the article

  • Wireless acting weird in Ubuntu 12.04 LTS

    - by Philip Yeldhos
I'm kinda new here, so please bear with me. My wireless driver is acting very weird. It shows my router's name, but while it is connecting (after entering the correct password), the icon in the tray keeps refreshing about once a second while showing the "connecting" animation. After a few seconds an error message comes up saying that the wireless network is disconnected. I installed the driver through "Additional Drivers". What info do you need? Somebody please help.

philip@philip-HP-Mini-110-3100:~$ sudo iwconfig
lo        no wireless extensions.

eth1      IEEE 802.11  ESSID:""
          Mode:Managed  Frequency:2.472 GHz  Access Point: Not-Associated
          Bit Rate:72 Mb/s   Tx-Power:24 dBm
          Retry min limit:7   RTS thr:off   Fragment thr:off
          Power Management:off
          Link Quality=5/5  Signal level=0 dBm  Noise level=-96 dBm
          Rx invalid nwid:0  Rx invalid crypt:11  Rx invalid frag:0
          Tx excessive retries:0  Invalid misc:0   Missed beacon:0

eth0      no wireless extensions.

Here's what lspci -v gave me:

02:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01)
        Subsystem: Hewlett-Packard Company Device 1483
        Flags: bus master, fast devsel, latency 0, IRQ 17
        Memory at 52000000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [58] Vendor Specific Information: Len=78 <?>
        Capabilities: [48] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [d0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [13c] Virtual Channel
        Capabilities: [160] Device Serial Number 00-00-82-ff-ff-3f-e0-2a
        Capabilities: [16c] Power Budgeting <?>
        Kernel driver in use: wl
        Kernel modules: wl, bcma, brcmsmac

Okay, I removed the driver that Additional Drivers gave me. Now this is what has happened. lsmod gave me:

philip@philip-HP-Mini-110-3100:~$ lsmod | grep brc
brcmsmac              540875  0
mac80211              436455  1 brcmsmac
brcmutil               14675  1 brcmsmac
cfg80211              178679  2 brcmsmac,mac80211
crc8                   12781  1 brcmsmac
cordic                 12487  1 brcmsmac

and iwconfig gave me:

philip@philip-HP-Mini-110-3100:~$ iwconfig
lo        no wireless extensions.

wlan0     IEEE 802.11bgn  ESSID:off/any
          Mode:Managed  Access Point: Not-Associated   Tx-Power=19 dBm
          Retry long limit:7   RTS thr:off   Fragment thr:off
          Power Management:off

eth0      no wireless extensions.

and lspci -v gave me:

02:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01)
        Subsystem: Hewlett-Packard Company Device 1483
        Flags: bus master, fast devsel, latency 0, IRQ 17
        Memory at 52000000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: <access denied>
        Kernel driver in use: brcmsmac
        Kernel modules: bcma, brcmsmac
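For reference (not from the original question), the usual first checks for a BCM4313 that flaps between the proprietary wl driver and the in-kernel brcmsmac are to make sure only one driver is installed and that the radio is unblocked; a sketch, assuming Ubuntu's usual package names:

    # Remove the proprietary Broadcom driver so it cannot conflict with brcmsmac
    sudo apt-get remove --purge bcmwl-kernel-source broadcom-sta-common

    # Check that the radio is neither soft- nor hard-blocked
    rfkill list

    # Reload the open driver and watch the kernel log while reconnecting
    sudo modprobe -r brcmsmac && sudo modprobe brcmsmac
    dmesg | grep -i -e brcm -e wlan | tail -20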

    Read the article

  • GWB | May 2010 Newsletter

    - by Staff of Geeks
Geekswithblogs.net - May 2010 Newsletter

Was your newsletter messed up? If the first newsletter we sent was all white with links all over the place, I apologize. We haven't sent a newsletter in quite some time and I forgot the rules of theming for email clients.

TechEd 2010 New Orleans - Who is coming?
John and Jeff will be at TechEd 2010 New Orleans for a short time to pass out shirts to our bloggers. These are the standard GWB shirts you see some of our other bloggers wearing, and one could be yours if you let us know you are coming and what your size is. We will announce where we will be once we arrive, so you can wear your Geekswithblogs.net shirt and let everyone know where you blog. These shirts will be without URLs; if you want one of those, you will have to join the contest mentioned later in the newsletter. Deadline for response is next Wednesday! Email [email protected] to let us know you are coming!

wblo.gs Shortened URL
Everyone has URL shorteners these days, so we needed our own. Geekswithblogs.net has implemented http://wblo.gs as a shortened URL for your blog. Instead of pointing people to geekswithblogs.net/your_url you can now use wblo.gs/your_url. We hope to publish a version of the shortener that will give each post a short URL and a button on your blog to publish to Twitter. This release should be available in early summer.

Always wanted one of our awesome Geekswithblogs.net T-shirts?
From May 15th until July 13th, we will be giving Geekswithblogs.net t-shirts away to those members who create 30 technical posts. That is 30 posts in 60 days! This should not be a difficult task with all the new releases from Microsoft. You have Visual Studio 2010, Team Foundation Server 2010, SharePoint 2010, Silverlight 4.0, Office 2010, SQL Server 2008 R2, and so many others to pick from. We will be keeping an eye on the results and publishing those bloggers who successfully meet the requirements by the date. A few rules to remember: the post must be technical in nature, the post must be original content created by the blogger, and the post must originate during the contest (no pulling content from your old blog). Each shirt will be customized with your URL on the back, and you can let us know your size and address when you successfully complete the contest.

What features do you want to see?
We would love to hear your feedback. It has been a while since we have published a release of Geekswithblogs.net and we want to get your feature requests into it. Do you like a particular skin we don't have? Do you want to see comments change? Do you want to see more Twitter integration? Whatever it is, we want to know. Submit your thoughts to [email protected] and we will start a dialog with you shortly after we receive them.

Follow Us on Twitter
We want to hear from you and make sure you know what is going on with Geekswithblogs.net. Follow the staff on Twitter (@StaffOfGeeks) and feel free to let us know what you think or any questions you have.

    Read the article

  • Game window systems and internal frames

    - by 2080
I don't know if this is a valid question, but: what kind of window manager do games use that have internal frames (frames inside frames)? Does this differ between programming languages? (Are, e.g., in Java the AWT/Swing libraries used to manage these and other graphical elements, such as buttons, or is this too restrictive in terms of speed and graphical possibilities?) A special example would be EVE Online, where the client can use the in-game windows like on a normal desktop.

    Read the article

  • Subaru Starts Thinking about their Path to Fusion Applications

    Brian Simmermon, VP and CIO, Subaru of America, and a member of Oracle's Fusion Strategy Council explains how Subaru is aligning their business and IT strategy to improve sales through Siebel and EBS, and is looking at implementing Fusion Technologies such as BPEL, AIA and Enterprise Manager to begin their evolutionary path to Fusion.

    Read the article

  • is RapidSSL wildcard cert supported by major browsers?

    - by Jorre
I'm thinking of buying a wildcard SSL cert from ClickSSL: http://www.clickssl.com/rapidssl/rapidsslwildcard.aspx That would be a RapidSSL certificate, and I was looking into my Firefox options to see if RapidSSL is in the list of recognized authorities. My certificate manager doesn't mention RapidSSL anywhere. Am I looking for the wrong name, e.g. is RapidSSL recognized by browsers under a different name? I want to be sure that this certificate works in all major browsers (including IE6).
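One way to check this yourself (not part of the original question) is to inspect the chain a RapidSSL-secured server actually presents; the issuer ("i:") lines show which root the browser has to trust, which is why "RapidSSL" itself may not appear in the authority list. A sketch, with a placeholder host name:

    # Print the subject (s:) and issuer (i:) of each certificate in the
    # chain presented by the server; secure.example.com is a placeholder.
    openssl s_client -connect secure.example.com:443 -showcerts </dev/null \
        | grep -e ' s:' -e ' i:'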

    Read the article

  • OpenVPN is working but no internet connection

    - by user3636476
I'm using an OpenVPN connection in Ubuntu. It works well, but while I'm using it my internet connection does not. I edited my connections in Network Manager: in the VPN tab I edited the VPN configuration, and in the IPv4 Settings tab I clicked the bottom-right "Routes" button and ticked "Use this connection only for resources on its network". When I do this the internet access works, but the VPN does not. Any help, please?
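For context (not from the question): that GUI checkbox corresponds to refusing the server-pushed default route. With a plain OpenVPN client config, the equivalent is route-nopull plus an explicit route for the VPN subnet only; a sketch, where 10.8.0.0/24 and client.ovpn are placeholder values:

    # Ignore routes pushed by the server, then route only the VPN subnet
    # through the tunnel so normal internet traffic keeps its default path.
    cat >> client.ovpn <<'EOF'
    route-nopull
    route 10.8.0.0 255.255.255.0
    EOF
    sudo openvpn --config client.ovpn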

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
The link-editor (ld) in Solaris 11 has a new feature that we call guidance, intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise.

We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works.

The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the arc case: PSARC 2010/312 Link-editor guidance

The History Of Unfortunate Link-Editor Defaults

The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system.

Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Executable and Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: for backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort.

We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines:

Change ld Defaults
Since the world would be a better place if the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting.

New Link-Editor
One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of existing code in the world built around the existing ld command, that reaches back to the 1970's. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues.

An Option To Make ld Do The Right Things Automatically
This line of reasoning is best summarized by a CR filed in 2005, entitled

    6239804 make it easier for ld(1) to do what's best

The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction:

The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best.

The intent is that when a user opts into -z best, they understand that -z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change. When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed.

The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best: that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it.

I drew two conclusions from the above history:

For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces.

For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together.

The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls:

-z guidance gives advice, but the decision to take that advice remains with the user, who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience.

It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject.

The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system.

Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: the user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards.

ld Command Line Options

The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature:

In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement.

Guidance suggestions always offer specific advice, not vague generalizations.

You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed.

As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that lets you turn it off.

In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows:

% ld -Dargs -z guidance=foo,nodefs
debug:
debug: Solaris Linkers: 5.11-1.2275
debug:
debug: arg[1]  option=-D:  option-argument: args
debug: arg[2]  option=-z:  option-argument: guidance=foo,nodefs
debug: warning: unrecognized -z guidance item: foo

The -z fatal-warnings option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms:

-z fatal-warnings | nofatal-warnings
--fatal-warnings | --no-fatal-warnings

The -z fatal-warnings and the --fatal-warnings option cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings option cause the link-editor to treat warnings as non-fatal. This is the default behavior.

The -z guidance option is defined as follows:

-z guidance[=item1,item2,...]
Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices.

It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris.

The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows.

Specify Required Dependencies
Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs.

Do Not Specify Non-Required Dependencies
Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused.

Lazy Loading
Dependencies should be identified for lazy loading. Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload.

Direct Bindings
Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct, or -z direct options should any dependency be processed before either of these options, or the -z nodirect option is encountered. This guidance can be disabled with -z guidance=nodirect.

Pure Text Segment
Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn or -z textoff options are encountered. This guidance can be disabled with -z guidance=notext.

Mapfile Syntax
All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile.

Library Search Path
Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath.

In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects.

Example

The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings:

- Does not specify all its dependencies
- Specifies dependencies it does not use
- Does not use direct bindings
- Uses a version 1 mapfile
- Contains relocations to the readonly allocable text (not PIC)

This scenario is sadly very common — many shared objects have one or more of these issues.

% cat hello.c
#include <stdio.h>
#include <unistd.h>

void
hello(void)
{
        printf("hello user %d\n", getpid());
}

% cat mapfile.v1
# This version 1 mapfile will trigger a guidance message

% cc hello.c -o hello.so -G -M mapfile.v1 -lelf

As you can see, the operation completes without error, resulting in a usable object. However, turning on guidance reveals a number of things that could be better:

% cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance
ld: guidance: version 2 mapfile syntax recommended: mapfile.v1
ld: guidance: -z lazyload option recommended before first dependency
ld: guidance: -B direct or -z direct option recommended before first dependency
Undefined                       first referenced
 symbol                             in file
getpid                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
printf                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
ld: warning: symbol referencing errors
ld: guidance: -z defs option recommended for shared objects
ld: guidance: removal of unused dependency recommended: libelf.so.1
warning: Text relocation remains                referenced
    against symbol                  offset      in file
.rodata1 (section)                  0xa         hello.o
getpid                              0x4         hello.o
printf                              0xf         hello.o
ld: guidance: position independent (PIC) code recommended for shared objects
ld: guidance: see ld(1) -z guidance for more information

Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things:

% cat mapfile.v2
# This version 2 mapfile will not trigger a guidance message
$mapfile_version 2

% cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance

There are situations in which the guidance does not fit the object being built. For instance, suppose you want to build an object without direct bindings:

% cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance
ld: guidance: -B direct or -z direct option recommended before first dependency
ld: guidance: see ld(1) -z guidance for more information

It is easy to disable that specific guidance warning without losing the overall benefit from allowing the remainder of the guidance feature to operate:

% cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect

Conclusions

The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact.

Deciding to use guidance is likely to cause some up front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will repay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.
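To illustrate the combination described above (my own sketch, reusing the article's example files): adding -z fatal-warnings turns the guidance messages into hard errors, so the unfixed link line now fails outright while the corrected one still builds cleanly.

    % cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance -zfatal-warnings
    (fails: each guidance warning shown earlier is now a fatal error)

    % cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc \
        -zguidance -zfatal-warnings
    (builds cleanly: all of the advice has been taken)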

    Read the article

  • AJI Report #15 – Zac Harlan Talks About Iowa Code Camp

    - by Jeff Julian
We sit down with Zac Harlan and talk about Iowa Code Camp, what makes up a Code Camp, and how to start your own Code Camp. Zac has been part of the leadership team for Iowa Code Camp for a few years and is the Development Manager for JP Cycles. We also get into what it takes to speak at a Code Camp if you are interested in growing beyond the user group as a speaker.
Listen to the Show
Site: LinkedIn Profile
Blog: Zac Harlan
Twitter: @ZacHarlan

    Read the article

  • TUI - Oracle Exadata V2 implementation

    - by rene.kundersma
With this post, a video recorded by Oracle Benelux. The message comes from the Senior ICT Manager of TUI The Netherlands. I had the pleasure, through Oracle Consulting The Netherlands, of doing the design, the implementation and the actual migration for this first Dutch Exadata success story.
Rene Kundersma
Oracle Technology Services, The Netherlands

    Read the article

  • Using an AGPL 3.0-licensed library for extra functionality in an iOS app

    - by Arnold Sakhnov
I am building an iOS application, and I am planning to incorporate an AGPL 3.0-licensed library into it to provide some extra (non-essential) functionality. I've got a couple of questions regarding this:
Do I understand it right that this still obliges me to publish the source of my app openly?
If yes, does it have to be open to the general public, or only to those who have downloaded the app?
Does this allow me to distribute the app for a fee?

    Read the article

  • Tired of Downloading Your Entire Schema? How About a Partial Download?

    - by user702295
Proceed to document 1448266.1, Demantra Non Invasive Partial Schema Export Utility, Extracting Partial Schema Dumps for Analysis. In this document you will find instructions and 2 SQL scripts. Give it a try. If you have a problem, feel free to post to this blog or to the community located at: https://communities.oracle.com/portal/server.pt/community/demantra_solutions/231 Regards!   Jeff
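The note's utility generates everything for you, but for a general sense of what a partial schema export looks like, here is a sketch using plain Oracle Data Pump (an illustration only, not the utility from note 1448266.1; the schema and table names are placeholders):

    # Export only selected tables from one schema with Data Pump.
    # DEMANTRA, SALES_DATA and MDP_MATRIX are placeholder names.
    cat > partial.par <<'EOF'
    schemas=DEMANTRA
    include=TABLE:"IN ('SALES_DATA','MDP_MATRIX')"
    directory=DATA_PUMP_DIR
    dumpfile=demantra_partial.dmp
    logfile=demantra_partial.log
    EOF
    expdp system parfile=partial.par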

    Read the article

  • Five things SSIS should drop

    - by jamiet
There’s a current SQL Server meme going round entitled "Five things SQL Server should drop" and, whilst no-one tagged me to write anything, I couldn’t resist doing the same for SQL Server Integration Services. So, without further ado, here are five things that I think should be dropped from SSIS.

Data source connections
Seriously, does anyone use these? I know why they’re there. Someone sat in a meeting back in the early part of the last decade and said “Ooo, Reporting Services and Analysis Services have these things called Data Sources. If we used them in Integration Services then we’d have a really cool integration story.” Errr… no.

Web Service Task
Ditto. If you want to do anything useful against anything but the simplest of SOAP web services, steer well clear of this peculiar SSIS addition.

ActiveX Script Task
Another task that I suspect has never seen the light of day in an SSIS package. It was billed as a way of running upgraded DTS2000 ActiveX scripts in SSIS – sounds good except for one thing: any time one of those scripts tries to talk to the DTS object model (which they all do – otherwise what’s the point) it will error out. This one has always been a real head scratcher.

Slowly Changing Dimension wizard
I suspect I may get some push back on this one but I’m mentioning it anyway. Some people like the SCD wizard; I am not one of those people! Everything that the SCD component does can easily be reproduced using other components, and from a performance point of view it’s much more beneficial to use those alternatives.

Multifile Connection Manager
Imagine buying a house that came with a set of keys that didn’t open any of the doors. Sounds ridiculous, right? How about an SSIS Connection Manager that doesn’t get used by any of the tasks or components? Ah, that’ll be the Multifile Connection Manager then!

Comments are of course welcome. Diatribes are assumed :)
@Jamiet

    Read the article

  • Nagios: Central Monitoring

Begin Linux: "This is part two of a three-part series on distributed monitoring. You can use passive service and host checks to allow non-central Nagios servers to collect data from a network of machines and then transfer that information to a central Nagios server."
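For a flavor of the mechanism the series describes (not from the excerpt): passive check results are commonly forwarded to the central server with send_nsca; a sketch, assuming NSCA is installed and central.example.com is the placeholder collector:

    # Submit one passive service check result to the central Nagios server.
    # Fields are: host <tab> service <tab> return code (0=OK) <tab> output.
    printf '%s\t%s\t%d\t%s\n' web01 'Disk Usage' 0 'DISK OK - 42% used' \
        | send_nsca -H central.example.com -c /etc/send_nsca.cfg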

    Read the article

  • links for 2011-02-08

    - by Bob Rhubart
When It Comes to Data Integration, Oracle Is the Right Choice (tags: ping.fm)

Webcast: Deploy Oracle VM Templates for Oracle E-Business Suite and Oracle PeopleSoft Enterprise Applications. Feb 15. Event Date: 02/15/2011 9:00am PT / Noon ET. Featured Speakers: Adam Hawley (Oracle Senior Director, Product Management, Virtualization), Ivo Dujmovic (Oracle Director, Technology Integration), Greg Kelly (Oracle Product Strategy Manager - PeopleTools). (tags: oracle virtualization peoplesoft)

Webcast: Managing Oracle Exadata with Oracle Enterprise Manager 11g. Thursday, February 10, 2011 - 10 a.m. PT/1 p.m. ET. Ask Oracle experts questions and learn firsthand how to efficiently manage all stages of Oracle Exadata’s lifecycle, from testing to deployment. (tags: oracle exalogic enterprisemanager)

Arthur Cole: Winning the Consolidated Data Center Future | ITBusinessEdge.com. "According to InformationWeek, the amount of data under management is increasing by about 20 percent per year, with some organizations having to deal with 50 percent or more." - Arthur Cole (tags: dataconsolidation enterprisearchitecture)

Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products (Telecommunications Architecture Corner). Raul Goycoolea's post examines "how enterprise product management enabled by PLM-based product catalogue solutions helps to launch next generation products rapidly in the context of the Telecommunication Industry." (tags: oracle otn enterprisearchitecture)

Richard Veryard on Architecture: What is an EA vendor? "Even some people who insist that enterprise architecture shouldn't be thought of as merely software architecture seem to think that 'tools' only means 'software tools.'" - Richard Veryard (tags: enterprisearchitecture)

MDM for Tax Authorities (Oracle Master Data Management). "Tax Authorities face a multitude of IT challenges," says David Butler. "Compounding these issues is the fact that the IT architectures in operation at most revenue and collections agencies are very complex." (tags: oracle otn MDM ITarchitecture)

Bernard Golden: How Cloud Computing Changes IT Staffs | CIO.com. "Enterprise architects become more important" tops Bernard's list of changes. (tags: cloudcomputing staffing cio enterprisearchitecture)

Martijn Linssen: Social Enterprise Magic Quadrant. "Revolutions usually go wrong, where evolutions usually go right." - Martijn Linssen (tags: socialcomputing enterprise2.0)

Why Do IT Roles Fail? | CIO. "The roles that come up most often are the ones that are not directly building or maintaining systems. These include architecture, planning, vendor management, relationship management, PMO, and security." - Marc Cecere (tags: softwarearchitecture technologyroles)

We're Hiring! - Server and Desktop Virtualization Product Management (Oracle's Virtualization Blog). Adam Hawley with information on an opportunity for qualified job seekers. (tags: oracle otn employment virtualization)

    Read the article

  • Multitenant Design for SQL Azure: White Paper Available

    - by Herve Roggero
Cloud computing is about scaling out all your application tiers, from the web application to the database layer. In fact, the whole promise of Azure is to pay for just what you need. You need more IIS servers? No problemo... just spin up another web server. You expect to double your storage needs for Azure Tables? No problemo; you are covered there too... just pay for your storage needs. But what about the database tier, SQL Azure? How do you add new databases easily, and transparently, so that your application simply uses more of SQL Azure if it needs to? Without changing a single line of code? And what if you need to scale back down? Welcome to the world of database scalability.

There are many terms that describe database scalability, including data federation, multitenant designs, and even NoSQL, depending on the technical solution you are implementing. Because SQL Azure is a transactional database system, NoSQL is not really an option. However, data federation and multitenant designs offer some very interesting scalability options that are worth considering. Data federation, a feature of SQL Azure that will be offered in the future, provides very interesting capabilities available natively on the SQL Azure platform. More to come in a few weeks...

Multitenant designs, on the other hand, are design practices and technologies meant to help you reach flexible scalability options not available otherwise. The first incarnation of such a method was made available on CodePlex as an open source project (http://enzosqlshard.codeplex.com). This project was an attempt to provide a sharding library for educational purposes. All that sounds really cool... and really esoteric... almost a form of database "voodoo"... However, after being on multiple Azure projects I am starting to see a real need. Customers want to be able to free themselves from the database tier, so that if they have 10 new customers tomorrow, all they need to do is add 2 more SQL Azure instances. It's that simple. How you achieve this, and suggested application design guidelines, are available in a white paper I just published.

The white paper offers two primary sections. The first section describes the business and technical problem at hand, and how to classify it according to specific design patterns. For example, I discuss compressed shards through schema separation. The second section offers a method for addressing the needs of a multitenant design using a new library, the big brother of the CodePlex project mentioned previously (which I created earlier this year), complete with a management interface and such. A Beta of this platform will be made available within weeks, as soon as the documentation is ready.

I would like to ask you to drop me a quick email at [email protected] if you are going to download the white paper. It's not required, but it would help me get in touch with you for feedback. You can download the white paper here: http://www.bluesyntax.net/files/EnzoFramework.pdf . Thank you, and I am looking for feedback, thoughts and implementation opportunities.

    Read the article

  • Replacement of Outlooksoft (SAP BPC) with Oracle EPM at Brady Corporation

Nigel Youell, Product Marketing Director, Enterprise Performance Management Applications at Oracle, discusses with Joe Bittorf, Project Manager at Brady Corporation, why and how they embarked on this major project to replace SAP BPC with Oracle's Enterprise Performance Management solution. Joe covers the outstanding improvements they have achieved in their financial close process, how they worked with Oracle partner Emerging Solutions, and the current project to further improve their planning and budgeting processes.

    Read the article

  • How can I prevent spam on sites which I control?

    - by danlefree
This is a general, community wiki question to address all non-specific spam prevention questions. If your question was closed as a duplicate of this question and you feel that the information provided here does not provide a sufficient answer, please open a discussion on Pro Webmasters Meta.

For purposes of this question, spam will include:
- Any automated post
- Manually-posted content which includes links to spammers' sites
- Manually-posted content which includes instructions to visit a spammer's site

    Read the article
