Search Results

Search found 16971 results on 679 pages for 'blogs'.


  • Agile PLM Partners and OCS offer valuable services

    - by Shane Goodwin
    One of the amazing benefits of using Agile is the ability to tap into a broad network of Partners and Oracle Consulting Services to leverage very talented people. Many of these people respond to postings on the Agile PLM Yahoo usergroups (WRAU and IAUG) as well as in the Oracle Support Community. As you are managing your Agile PLM project, either a new implementation or an upgrade, these Partners and OCS can jumpstart your internal resources with information and experience that take years to develop. They can also offer other capabilities which add to your Agile experience. As an example, GoEngineer recently announced a new offering to host Agile PLM databases for companies who do not want to run their own Agile PLM system. This offering provides our smaller customers with more options on how to implement Agile PLM. The Oracle Consulting Services group has two new offerings, one for Rapid Deployments of new customers and another for Process and Technical Diagnostics of existing customers in order to better leverage the PLM investment. Contact your Oracle Sales representative to learn more. In addition, there are many other partners who are helping our customers get the most from Agile PLM. Some examples are below. In the coming months, the Oracle Partner Network will be working with partners to certify them as specialists in Agile PLM. Some of our other current partners working on Agile PLM are: Kalypso, J Squared, Domain Systems, Sierra Atlantic, and Minerva.

    Read the article

  • EU Research for ICT - Call 7 - biggest ever at € 780 million

    - by trond-arne.undheim
    Under the Digital Agenda for Europe, the Commission has committed to maintaining the pace of a 20% yearly increase of the annual ICT R&D budget at least until 2013. The EU's flagship policy programme calls for doubling of annual public spending on ICT R&D by 2020 and to leverage an equivalent increase in private spending to achieve the goals of Europe's 2020 strategy for jobs and growth. Call 7 is one of the biggest calls ever launched for information and communications technology (ICT) research proposals under the EU's research framework programmes. It will result in project funding of € 780 million in 2011. This funding will advance research on the future internet, robotics, smart and embedded systems, photonics, ICT for energy efficiency, health and well-being in an ageing society, and more. The €780 million call for proposals is part of the biggest ever annual Work Programme under the EU's 7th Framework Programme for Research. Almost €1.2 billion has been budgeted for 2011. €220 million were made available already in July 2010 for public private partnerships focusing on ICT for smart cars, green buildings, sustainable factories and the future internet. Universities, research centres, SMEs, large companies and other organisations in Europe and beyond are eligible to apply for project funding under ICT Call 7. Proposals can be submitted until 18 January 2011, after which they will be evaluated by independent panels of experts for selection on the basis of their quality. Background: Digital Agenda: European Commission announces €780 million boost for strategic ICT research. Call text: ICT Call 7 Deadline: 18/01/2011.

    Read the article

  • Microsoft Press Weekend Deal 26/May/2012 - Microsoft® Manual of Style, 4th Edition

    - by TATWORTH
    At http://shop.oreilly.com/product/0790145305770.do?code=MSDEAL, Microsoft Press are offering the Microsoft® Manual of Style, 4th Edition as a PDF for 50% off using the MSDEAL code. "Maximize the impact and precision of your message! Now in its fourth edition, the Microsoft Manual of Style provides essential guidance to content creators, journalists, technical writers, editors, and everyone else who writes about computer technology. Direct from the Editorial Style Board at Microsoft—you get a comprehensive glossary of both general technology terms and those specific to Microsoft; clear, concise usage and style guidelines with helpful examples and alternatives; guidance on grammar, tone, and voice; and best practices for writing content for the web, optimizing for accessibility, and communicating to a worldwide audience. Fully updated and optimized for ease of use, the Microsoft Manual of Style is designed to help you communicate clearly, consistently, and accurately about technical topics—across a range of audiences and media." There is a sample chapter available for free download at the above link.

    Read the article

  • OBIEE 11.1.1 - Disable Wrap Data Types in WebLogic Server 10.3.x

    - by Ahmed Awan
    By default, JDBC data type objects are wrapped with a WebLogic wrapper. This allows the server to provide features such as debugging output and connection usage tracking. The wrapping can be turned off by setting this value to false, which improves performance, in some cases significantly, and allows the application to use the native driver objects directly.
    Tip: How to Disable Wrapping in the WLS Administration Console. You can use the Administration Console to disable data type wrapping for the following JDBC data sources in the bifoundation_domain domain: bip_datasource, mds-owsm, and EPMSystemRegistry. To disable wrapping for each of these JDBC data sources:
    1. If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
    2. In the Domain Structure tree, expand Services, then select Data Sources.
    3. On the Summary of Data Sources page, click the data source name, for example “mds-owsm”.
    4. Select the Configuration: Connection Pool tab.
    5. Scroll down and click Advanced to show the advanced connection pool options.
    6. In Wrap Data Types, deselect the checkbox to disable wrapping.
    7. Click Save.
    8. To activate these changes, in the Change Center of the Administration Console, click Activate Changes.
    Important Note: This change does not take effect immediately; it requires the server to be restarted.
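    If you would rather script the change than click through the console, the same attribute can be set with WLST. The following is a minimal sketch rather than a tested procedure: the admin URL and credentials are placeholders, and the MBean path shown is the conventional one for JDBC system resources, so verify it against your own bifoundation_domain before running. The data source names come from the list above.

        # disable_wrap_types.py - run with wlst.sh; sketch only, adapt before use
        connect('weblogic', 'welcome1', 't3://adminhost:7001')   # placeholder credentials and URL
        edit()
        startEdit()
        for ds in ['bip_datasource', 'mds-owsm', 'EPMSystemRegistry']:
            # Conventional edit-tree path to a JDBC system resource's connection pool parameters
            cd('/JDBCSystemResources/' + ds + '/JDBCResource/' + ds + '/JDBCConnectionPoolParams/' + ds)
            cmo.setWrapTypes(false)   # same effect as clearing the "Wrap Data Types" checkbox
        save()
        activate()
        disconnect()
        # As noted above, a server restart is still required for the change to take effect.

    The console steps above remain the safer route if you are unsure of the MBean paths in your domain.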

    Read the article

  • EBS 11i and R12 certified with DB 11gR2 11.2.0.1 on Windows

    - by Steven Chan
    Oracle Database 11g Release 2 (11gR2) version 11.2.0.1 is now certified with Oracle E-Business Suite 11i and 12 on the following Microsoft Windows Server (32-bit) and Windows x64 (64-bit) operating systems: Windows Server 2003 (32-bit and 64-bit), Windows Server 2003 R2 (32-bit and 64-bit), Windows Server 2008 (32-bit and 64-bit), and Windows Server 2008 R2 (64-bit only). Certified EBS Releases: Oracle E-Business Suite Release 11.5.10.2, Oracle E-Business Suite Release 12.0.4 or higher, and Oracle E-Business Suite Release 12.1.1 or higher.

    Read the article

  • Adventures in MVVM – My ViewModel Base

    - by Brian Genisio's House Of Bilz
    More Adventures in MVVM First, I’d like to say: THIS IS NOT A NEW MVVM FRAMEWORK. I tend to believe that MVVM support code should be specific to the system you are building and the developers working on it.  I have yet to find an MVVM framework that does everything I want it to without doing too much.  Don’t get me wrong… there are some good frameworks out there.  I just like to pick and choose things that make sense for me.  I’d also like to add that some of these features only work in WPF.  As of Silveright 4, they don’t support binding to dynamic properties, so some of the capabilities are lost. That being said, I want to share my ViewModel base class with the world.  I have had several conversations with people about the problems I have solved using this ViewModel base.  A while back, I posted an article about some experiments with a “Rails Inspired ViewModel”.  What followed from those ideas was a ViewModel base class that I take with me and use in my projects.  It has a lot of features, all designed to reduce the friction in writing view models. I have put the code out on Codeplex under the project: ViewModelSupport. Finally, this article focuses on the ViewModel and only glosses over the View and the Model.  Without all three, you don’t have MVVM.  But this base class is for the ViewModel, so that is what I am focusing on. Features: Automatic Command Plumbing Property Change Notification Strongly Typed Property Getter/Setters Dynamic Properties Default Property values Derived Properties Automatic Method Execution Command CanExecute Change Notification Design-Time Detection What about Silverlight? Automatic Command Plumbing This feature takes the plumbing out of creating commands.  The common pattern for commands in a ViewModel is to have an Execute method as well as an optional CanExecute method.  To plumb that together, you create an ICommand Property, and set it in the constructor like so: Before public class AutomaticCommandViewModel { public AutomaticCommandViewModel() { MyCommand = new DelegateCommand(Execute_MyCommand, CanExecute_MyCommand); } public void Execute_MyCommand() { // Do something } public bool CanExecute_MyCommand() { // Are we in a state to do something? return true; } public DelegateCommand MyCommand { get; private set; } } With the base class, this plumbing is automatic and the property (MyCommand of type ICommand) is created for you.  The base class uses the convention that methods be prefixed with Execute_ and CanExecute_ in order to be plumbed into commands with the property name after the prefix.  You are left to be expressive with your behavior without the plumbing.  If you are wondering how CanExecuteChanged is raised, see the later section “Command CanExecute Change Notification”. After public class AutomaticCommandViewModel : ViewModelBase { public void Execute_MyCommand() { // Do something } public bool CanExecute_MyCommand() { // Are we in a state to do something? return true; } }   Property Change Notification One thing that always kills me when implementing ViewModels is how to make properties that notify when they change (via the INotifyPropertyChanged interface).  There have been many attempts to make this more automatic.  My base class includes one option.  There are others, but I feel like this works best for me. The common pattern (without my base class) is to create a private backing store for the variable and specify a getter that returns the private field.  
The setter will set the private field and fire an event that notifies the change, only if the value has changed. Before public class PropertyHelpersViewModel : INotifyPropertyChanged { private string text; public string Text { get { return text; } set { if(text != value) { text = value; RaisePropertyChanged("Text"); } } } protected void RaisePropertyChanged(string propertyName) { var handlers = PropertyChanged; if(handlers != null) handlers(this, new PropertyChangedEventArgs(propertyName)); } public event PropertyChangedEventHandler PropertyChanged; } This way of defining properties is error-prone and tedious.  Too much plumbing.  My base class eliminates much of that plumbing with the same functionality: After public class PropertyHelpersViewModel : ViewModelBase { public string Text { get { return Get<string>("Text"); } set { Set("Text", value);} } }   Strongly Typed Property Getters/Setters It turns out that we can do better than that.  We are using a strongly typed language where the use of “Magic Strings” is often frowned upon.  Lets make the names in the getters and setters strongly typed: A refinement public class PropertyHelpersViewModel : ViewModelBase { public string Text { get { return Get(() => Text); } set { Set(() => Text, value); } } }   Dynamic Properties In C# 4.0, we have the ability to program statically OR dynamically.  This base class lets us leverage the powerful dynamic capabilities in our ecosystem. (This is how the automatic commands are implemented, BTW)  By calling Set(“Foo”, 1), you have now created a dynamic property called Foo.  It can be bound against like any static property.  The opportunities are endless.  One great way to exploit this behavior is if you have a customizable view engine with templates that bind to properties defined by the user.  The base class just needs to create the dynamic properties at runtime from information in the model, and the custom template can bind even though the static properties do not exist. All dynamic properties still benefit from the notifiable capabilities that static properties do. For any nay-sayers out there that don’t like using the dynamic features of C#, just remember this: the act of binding the View to a ViewModel is dynamic already.  Why not exploit it?  Get over it :) Just declare the property dynamically public class DynamicPropertyViewModel : ViewModelBase { public DynamicPropertyViewModel() { Set("Foo", "Bar"); } } Then reference it normally <TextBlock Text="{Binding Foo}" />   Default Property Values The Get() method also allows for default properties to be set.  Don’t set them in the constructor.  Set them in the property and keep the related code together: public string Text { get { return Get(() => Text, "This is the default value"); } set { Set(() => Text, value);} }   Derived Properties This is something I blogged about a while back in more detail.  This feature came from the chaining of property notifications when one property affects the results of another, like this: Before public class DependantPropertiesViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); RaisePropertyChanged("Percentage"); RaisePropertyChanged("Output"); } } public int Percentage { get { return (int)(100 * Score); } } public string Output { get { return "You scored " + Percentage + "%."; } } } The problem is: The setter for Score has to be responsible for notifying the world that Percentage and Output have also changed.  This, to me, is backwards.    
It certainly violates the “Single Responsibility Principle.” I have been bitten in the rear more than once by problems created from code like this.  What we really want to do is invert the dependency.  Let the Percentage property declare that it changes when the Score Property changes. After public class DependantPropertiesViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); } } [DependsUpon("Score")] public int Percentage { get { return (int)(100 * Score); } } [DependsUpon("Percentage")] public string Output { get { return "You scored " + Percentage + "%."; } } }   Automatic Method Execution This one is extremely similar to the previous, but it deals with method execution as opposed to property.  When you want to execute a method triggered by property changes, let the method declare the dependency instead of the other way around. Before public class DependantMethodsViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); WhenScoreChanges(); } } public void WhenScoreChanges() { // Handle this case } } After public class DependantMethodsViewModel : ViewModelBase { public double Score { get { return Get(() => Score); } set { Set(() => Score, value); } } [DependsUpon("Score")] public void WhenScoreChanges() { // Handle this case } }   Command CanExecute Change Notification Back to Commands.  One of the responsibilities of commands that implement ICommand – it must fire an event declaring that CanExecute() needs to be re-evaluated.  I wanted to wait until we got past a few concepts before explaining this behavior.  You can use the same mechanism here to fire off the change.  In the CanExecute_ method, declare the property that it depends upon.  When that property changes, the command will fire a CanExecuteChanged event, telling the View to re-evaluate the state of the command.  The View will make appropriate adjustments, like disabling the button. DependsUpon works on CanExecute methods as well public class CanExecuteViewModel : ViewModelBase { public void Execute_MakeLower() { Output = Input.ToLower(); } [DependsUpon("Input")] public bool CanExecute_MakeLower() { return !string.IsNullOrWhiteSpace(Input); } public string Input { get { return Get(() => Input); } set { Set(() => Input, value);} } public string Output { get { return Get(() => Output); } set { Set(() => Output, value); } } }   Design-Time Detection If you want to add design-time data to your ViewModel, the base class has a property that lets you ask if you are in the designer.  You can then set some default values that let your designer see what things might look like in runtime. Use the IsInDesignMode property public DependantPropertiesViewModel() { if(IsInDesignMode) { Score = .5; } }   What About Silverlight? Some of the features in this base class only work in WPF.  As of version 4, Silverlight does not support binding to dynamic properties.  This, in my opinion, is a HUGE limitation.  Not only does it keep you from using many of the features in this ViewModel, it also keeps you from binding to ViewModels designed in IronRuby.  Does this mean that the base class will not work in Silverlight?  No.  Many of the features outlined in this article WILL work.  All of the property abstractions are functional, as long as you refer to them statically in the View.  This, of course, means that the automatic command hook-up doesn’t work in Silverlight.  
You need to plumb it to a static property in order for the Silverlight View to bind to it.  Can I has a dynamic property in SL5?     Good to go? So, that concludes the feature explanation of my ViewModel base class.  Feel free to take it, fork it, whatever.  It is hosted on CodePlex.  When I find other useful additions, I will add them to the public repository.  I use this base class every day.  It is mature, and well tested.  If, however, you find any problems with it, please let me know!  Also, feel free to suggest patches to me via the CodePlex site.  :)

    Read the article

  • I’m a Phoenix… and I’m miffed

    - by Stan Spotts
    For personal reasons, almost 30 years ago I left school to enter the workforce. I decided late 2008 to go back to school and finish my degree. After the expected loss of credits for a transfer, from Temple University to University of Phoenix, I'm now about 75% done. The experience has been interesting. Classes are time compressed, only 5 weeks each. Because I have a family and a full time job, I'm taking one at a time. Even so, I've written more papers in these classes than I ever wrote at Temple. My own papers are one thing, but the team papers give me heartburn since I can't completely control what goes into them. Not a big deal except that they make up 30% of our grade. In any case, most of the class facilitators have been great. I had great ones for Accounting, Finance, and frankly most others. I've had a few (4, maybe) cases where I was less than 2 points from an A, and asked the facilitator if I could get any of my work reviewed to see if I could get those extra points. I figured it was worth a shot, and there were no extenuating circumstances to help make my case. I think that only one facilitator decided after a review of one paper that my interpretation was good, just not what he expected, and gave me another point, which gave me an A. So while none are pushovers, they've all been open to discussion, which is as much as I should expect. Overall, good experience. That is, until my last class. On the second week, the day I was due to hand in my personal assignment for the week, I was in an accident. An SUV creamed my little Ford Focus, and totaled it (estimated repair over $11K). I was pretty banged up, especially my left shoulder. I was scheduled for rotator cuff surgery for two weeks later, and getting hit against the door really made it worse. After dealing with the police, the EMT, the tow truck, and the Percocet and Flexeril for the pain, I crashed for the night and didn't get to upload my paper until the next day. The instructor took 30% off for it being late, even after I supplied photos of the car, my arm (huge bruises), and offered to supply the police report number. I figured I'd be okay since that's 2.7 points, and I could lose up to 5 before jeopardizing an A grade. Well, that wasn't the case as we lost more points than I expected on our team paper in Week 5. I ended up with a 94.3. Yes, 7/10 of a point from an A. Of course I asked the instructor to review the issue with the accident and give me just the 0.7 points I needed for the A. That got me a short response of "I have received your emails and review your work over the last five weeks. Your current grade will stand. If you would like to dispute your grade then please feel free to contact your academic advisor. I wish you much success in your professional and academic career." Brrrr….! So I asked my academic advisor to file a dispute. If it wasn't that a pretty bad car accident was the cause, I wouldn't have. Without the grade reduction, I would have had a 97 for the class, so I'll argue that I was performing at the A level throughout the class. Why her purported "review" of my work didn't then warrant such a minor adjustment, I don't know. An A- drops my GPA, and this ticked me off. Now I have to wait and see what the school says about the grade dispute.

    Read the article

  • Elfsign Object Signing on Solaris

    - by danx
    Don't let this happen to you—use elfsign! Solaris elfsign(1) is a command that signs and verifies ELF format executables. That includes not just executable programs (such as ls or cp), but other ELF format files including libraries (such as libnvpair.so) and kernel modules (such as autofs). Elfsign has been available since Solaris 10, and ELF format files distributed with Solaris, since Solaris 10, are signed by either Sun Microsystems or its successor, Oracle Corporation. When an ELF file is signed, elfsign adds a new section to the ELF file, .SUNW_signature, that contains an RSA public key signature and other information about the signer: the algorithm used, algorithm OID, signer CN/OU, and time stamp. The signature section can later be verified by elfsign or other software by matching the signature in the file against the ELF file contents (excluding the signature). ELF executable files may also be signed by a 3rd-party or by the customer. This is useful for verifying the origin and authenticity of executable files installed on a system. The 3rd-party or customer public key certificate should be installed in /etc/certs/ to allow verification by elfsign. For currently-released versions of Solaris, only cryptographic framework plugin libraries are verified by Solaris. However, all ELF files may be verified by the elfsign command at any time.
    Elfsign Algorithms
    Elfsign signatures are created by taking a digest of the ELF section contents, then signing the digest with RSA. To verify, one takes a digest of the ELF file and compares it with the expected digest that's computed from the signature and RSA public key. Originally elfsign took an MD5 digest of a SHA-1 digest of the ELF file sections, then signed the resulting digest with RSA. In Solaris 11.1 and then Solaris 11.1 SRU 7 (5/2013), the available elfsign crypto algorithms were expanded to keep up with evolving cryptography. The following table shows the available elfsign algorithms:
    Elfsign Algorithm             | Solaris Release     | Comments
    elfsign sign -F rsa_md5_sha1  | S10, S11.0, S11.1   | Default for S10. Not recommended*
    elfsign sign -F rsa_sha1      | S11.1               | Default for S11.1. Not recommended
    elfsign sign -F rsa_sha256    | S11.1 patch SRU7+   | Recommended
    *Most or all CAs do not accept MD5 CSRs and do not issue MD5 certs due to MD5 hash collision problems.
    RSA Key Length. The RSA key length I recommend using with elfsign is RSA-2048, as the best balance between a long expected "life time", interoperability, and performance. RSA-2048 keys have an expected lifetime through 2030 (and probably beyond). For details, see Recommendation for Key Management: Part 1: General, NIST Publication SP 800-57 part 1 (rev. 3, 7/2012, PDF), tables 2 and 4 (pp. 64, 67).
    Step 1: create or obtain a key and cert
    The first step in using elfsign is to obtain a key and cert from a public Certificate Authority (CA), or create your own self-signed key and cert. I'll briefly explain both methods.
    Obtaining a Certificate from a CA
    To obtain a cert from a CA, such as Verisign, Thawte, or Go Daddy (to name a few random examples), you create a private key and a Certificate Signing Request (CSR) file and send it to the CA, following the instructions of the CA on their website. They send back a signed public key certificate. The public key cert, along with the private key you created, is used by elfsign to sign an ELF file. The public key cert is distributed with the software and is used by elfsign to verify elfsign signatures in ELF files.
You need to request a RSA "Class 3 public key certificate", which is used for servers and software signing. Elfsign uses RSA and we recommend RSA-2048 keys. The private key and CSR can be generated with openssl(1) or pktool(1) on Solaris. Here's a simple example that uses pktool to generate a private RSA_2048 key and a CSR for sending to a CA: $ pktool gencsr keystore=file format=pem outcsr=MYCSR.p10 \ subject="CN=canineswworks.com,OU=Canine SW object signing" \ outkey=MYPRIVATEKEY.key $ openssl rsa -noout -text -in MYPRIVATEKEY.key Private-Key: (2048 bit) modulus: 00:d2:ef:42:f2:0b:8c:96:9f:45:32:fc:fe:54:94: . . . [omitted for brevity] . . . c9:c7 publicExponent: 65537 (0x10001) privateExponent: 26:14:fc:49:26:bc:a3:14:ee:31:5e:6b:ac:69:83: . . . [omitted for brevity] . . . 81 prime1: 00:f6:b7:52:73:bc:26:57:26:c8:11:eb:6c:dc:cb: . . . [omitted for brevity] . . . bc:91:d0:40:d6:9d:ac:b5:69 prime2: 00:da:df:3f:56:b2:18:46:e1:89:5b:6c:f1:1a:41: . . . [omitted for brevity] . . . f3:b7:48:de:c3:d9:ce:af:af exponent1: 00:b9:a2:00:11:02:ed:9a:3f:9c:e4:16:ce:c7:67: . . . [omitted for brevity] . . . 55:50:25:70:d3:ca:b9:ab:99 exponent2: 00:c8:fc:f5:57:11:98:85:8e:9a:ea:1f:f2:8f:df: . . . [omitted for brevity] . . . 23:57:0e:4d:b2:a0:12:d2:f5 coefficient: 2f:60:21:cd:dc:52:76:67:1a:d8:75:3e:7f:b0:64: . . . [omitted for brevity] . . . 06:94:56:d8:9d:5c:8e:9b $ openssl req -noout -text -in MYCSR.p10 Certificate Request: Data: Version: 2 (0x2) Subject: OU=Canine SW object signing, CN=canineswworks.com Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:d2:ef:42:f2:0b:8c:96:9f:45:32:fc:fe:54:94: . . . [omitted for brevity] . . . c9:c7 Exponent: 65537 (0x10001) Attributes: Signature Algorithm: sha1WithRSAEncryption b3:e8:30:5b:88:37:68:1c:26:6b:45:af:5e:de:ea:60:87:ea: . . . [omitted for brevity] . . . 06:f9:ed:b4 Secure storage of RSA private key. The private key needs to be protected if the key signing is used for production (as opposed to just testing). That is, protect the key to protect against unauthorized signatures by others. One method is to use a PIN-protected PKCS#11 keystore. The private key you generate should be stored in a secure manner, such as in a PKCS#11 keystore using pktool(1). Otherwise others can sign your signature. Other secure key storage mechanisms include a SCA-6000 crypto card, a USB thumb drive stored in a locked area, a dedicated server with restricted access, Oracle Key Manager (OKM), or some combination of these. I also recommend secure backup of the private key. Here's an example of generating a private key protected in the PKCS#11 keystore, and a CSR. $ pktool setpin # use if PIN not set yet Enter token passphrase: changeme Create new passphrase: Re-enter new passphrase: Passphrase changed. $ pktool gencsr keystore=pkcs11 label=MYPRIVATEKEY \ format=pem outcsr=MYCSR.p10 \ subject="CN=canineswworks.com,OU=Canine SW object signing" $ pktool list keystore=pkcs11 Enter PIN for Sun Software PKCS#11 softtoken: Found 1 asymmetric public keys. Key #1 - RSA public key: MYPRIVATEKEY Here's another example that uses openssl instead of pktool to generate a private key and CSR: $ openssl genrsa -out cert.key 2048 $ openssl req -new -key cert.key -out MYCSR.p10 Self-Signed Cert You can use openssl or pktool to create a private key and a self-signed public key certificate. A self-signed cert is useful for development, testing, and internal use. The private key created should be stored in a secure manner, as mentioned above. 
The following example creates a private key, MYSELFSIGNED.key, and a public key cert, MYSELFSIGNED.pem, using pktool and displays the contents with the openssl command. $ pktool gencert keystore=file format=pem serial=0xD06F00D lifetime=20-year \ keytype=rsa hash=sha256 outcert=MYSELFSIGNED.pem outkey=MYSELFSIGNED.key \ subject="O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com" $ pktool list keystore=file objtype=cert infile=MYSELFSIGNED.pem Found 1 certificates. 1. (X.509 certificate) Filename: MYSELFSIGNED.pem ID: c8:24:59:08:2b:ae:6e:5c:bc:26:bd:ef:0a:9c:54:de:dd:0f:60:46 Subject: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com Issuer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com Not Before: Oct 17 23:18:00 2013 GMT Not After: Oct 12 23:18:00 2033 GMT Serial: 0xD06F00D0 Signature Algorithm: sha256WithRSAEncryption $ openssl x509 -noout -text -in MYSELFSIGNED.pem Certificate: Data: Version: 3 (0x2) Serial Number: 3496935632 (0xd06f00d0) Signature Algorithm: sha256WithRSAEncryption Issuer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com Validity Not Before: Oct 17 23:18:00 2013 GMT Not After : Oct 12 23:18:00 2033 GMT Subject: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:bb:e8:11:21:d9:4b:88:53:8b:6c:5a:7a:38:8b: . . . [omitted for brevity] . . . bf:77 Exponent: 65537 (0x10001) Signature Algorithm: sha256WithRSAEncryption 9e:39:fe:c8:44:5c:87:2c:8f:f4:24:f6:0c:9a:2f:64:84:d1: . . . [omitted for brevity] . . . 5f:78:8e:e8 $ openssl rsa -noout -text -in MYSELFSIGNED.key Private-Key: (2048 bit) modulus: 00:bb:e8:11:21:d9:4b:88:53:8b:6c:5a:7a:38:8b: . . . [omitted for brevity] . . . bf:77 publicExponent: 65537 (0x10001) privateExponent: 0a:06:0f:23:e7:1b:88:62:2c:85:d3:2d:c1:e6:6e: . . . [omitted for brevity] . . . 9c:e1:e0:0a:52:77:29:4a:75:aa:02:d8:af:53:24: c1 prime1: 00:ea:12:02:bb:5a:0f:5a:d8:a9:95:b2:ba:30:15: . . . [omitted for brevity] . . . 5b:ca:9c:7c:19:48:77:1e:5d prime2: 00:cd:82:da:84:71:1d:18:52:cb:c6:4d:74:14:be: . . . [omitted for brevity] . . . 5f:db:d5:5e:47:89:a7:ef:e3 exponent1: 32:37:62:f6:a6:bf:9c:91:d6:f0:12:c3:f7:04:e9: . . . [omitted for brevity] . . . 97:3e:33:31:89:66:64:d1 exponent2: 00:88:a2:e8:90:47:f8:75:34:8f:41:50:3b:ce:93: . . . [omitted for brevity] . . . ff:74:d4:be:f3:47:45:bd:cb coefficient: 4d:7c:09:4c:34:73:c4:26:f0:58:f5:e1:45:3c:af: . . . [omitted for brevity] . . . af:01:5f:af:ad:6a:09:bf Step 2: Sign the ELF File object By now you should have your private key, and obtained, by hook or crook, a cert (either from a CA or use one you created (a self-signed cert). The next step is to sign one or more objects with your private key and cert. Here's a simple example that creates an object file, signs, verifies, and lists the contents of the ELF signature. $ echo '#include <stdio.h>\nint main(){printf("Hello\\n");}'>hello.c $ make hello cc -o hello hello.c $ elfsign verify -v -c MYSELFSIGNED.pem -e hello elfsign: no signature found in hello. $ elfsign sign -F rsa_sha256 -v -k MYSELFSIGNED.key -c MYSELFSIGNED.pem -e hello elfsign: hello signed successfully. format: rsa_sha256. signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com. signed on: October 17, 2013 04:22:49 PM PDT. 
$ elfsign list -f format -e hello rsa_sha256 $ elfsign list -f signer -e hello O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com $ elfsign list -f time -e hello October 17, 2013 04:22:49 PM PDT $ elfsign verify -v -c MYSELFSIGNED.key -e hello elfsign: verification of hello failed. format: rsa_sha256. signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com. signed on: October 17, 2013 04:22:49 PM PDT. Signing using the pkcs11 keystore To sign the ELF file using a private key in the secure pkcs11 keystore, replace "-K MYSELFSIGNED.key" in the "elfsign sign" command line with "-T MYPRIVATEKEY", where MYPRIVATKEY is the pkcs11 token label. Step 3: Install the cert and test on another system Just signing the object isn't enough. You need to copy or install the cert and the signed ELF file(s) on another system to test that the signature is OK. Your public key cert should be installed in /etc/certs. Use elfsign verify to verify the signature. Elfsign verify checks each cert in /etc/certs until it finds one that matches the elfsign signature in the file. If one isn't found, the verification fails. Here's an example: $ su Password: # rm /etc/certs/MYSELFSIGNED.key # cp MYSELFSIGNED.pem /etc/certs # exit $ elfsign verify -v hello elfsign: verification of hello passed. format: rsa_sha256. signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com. signed on: October 17, 2013 04:24:20 PM PDT. After testing, package your cert along with your ELF object to allow elfsign verification after your cert and object are installed or copied. Under the Hood: elfsign verification Here's the steps taken to verify a ELF file signed with elfsign. The steps to sign the file are similar except the private key exponent is used instead of the public key exponent and the .SUNW_signature section is written to the ELF file instead of being read from the file. Generate a digest (SHA-256) of the ELF file sections. This digest uses all ELF sections loaded in memory, but excludes the ELF header, the .SUNW_signature section, and the symbol table Extract the RSA signature (RSA-2048) from the .SUNW_signature section Extract the RSA public key modulus and public key exponent (65537) from the public key cert Calculate the expected digest as follows:     signaturepublicKeyExponent % publicKeyModulus Strip the PKCS#1 padding (most significant bytes) from the above. The padding is 0x00, 0x01, 0xff, 0xff, . . ., 0xff, 0x00. If the actual digest == expected digest, the ELF file is verified (OK). Further Information elfsign(1), pktool(1), and openssl(1) man pages. "Signed Solaris 10 Binaries?" blog by Darren Moffat (2005) shows how to use elfsign. "Simple CLI based CA on Solaris" blog by Darren Moffat (2008) shows how to set up a simple CA for use with self-signed certificates. "How to Create a Certificate by Using the pktool gencert Command" System Administration Guide: Security Services (available at docs.oracle.com)
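    The "Under the Hood" steps above are textbook RSA signature verification: raise the signature to the public exponent modulo the public modulus (signature ^ publicKeyExponent mod publicKeyModulus), strip the PKCS#1 padding, and compare what remains with a freshly computed digest of the ELF sections. A minimal Python sketch of just that arithmetic, following the steps as the post describes them (the inputs are placeholders; this illustrates the math and is not a replacement for elfsign verify):

        import hashlib

        def rsa_verify_sketch(section_bytes, signature_int, pub_exponent, pub_modulus):
            # Digest of the loaded ELF sections (rsa_sha256 uses SHA-256)
            actual_digest = hashlib.sha256(section_bytes).digest()
            # signature ^ publicKeyExponent mod publicKeyModulus
            decrypted = pow(signature_int, pub_exponent, pub_modulus)
            padded = decrypted.to_bytes((pub_modulus.bit_length() + 7) // 8, "big")
            # Strip the PKCS#1 padding: 0x00 0x01 0xff ... 0xff 0x00, then the digest follows
            if not padded.startswith(b"\x00\x01"):
                return False
            expected_digest = padded[padded.index(b"\x00", 2) + 1:]
            # Verified if the actual digest equals the expected digest
            return expected_digest == actual_digest

    The real tool also selects the correct sections, reads the algorithm OID, and matches the signer against the certs in /etc/certs; the sketch only shows why the exponentiation and padding-strip steps recover the signer's digest.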

    Read the article

  • SQL Developer Q&A from ODTUG Tips & Tricks Webcast

    - by thatjeffsmith
    Another great webcast yesterday – if you’re a paying member of ODTUG you can watch the show for yourself in their archives. If not, you can get my slide deck off of SlideShare. About 150 of you brave souls sat through an entire hour of me talking and then 10 more minutes of Q&A. We went through everything rapid-fire style, so I thought I would post the questions and my refined answers here for your perusal. In the order in which I received them: You showed the preference to choose between resultsets in same tab or ain a new tab. I understand that we can not have it both using different hotkeys? For example: F5 run and resultset to same tab, ctrl-f5 same but to new tab? Sometimes you want the one other times the other. The questioner is asking about this preference, Tools Preferences Database Worksheet ‘Show query results in new tabs.’ This is an all or nothing proposition. But, there’s another, perhaps better way: the document PINs. If you have a result set you don’t want to lose, ‘pin it.’ Pin multiple result sets or plans for review and comparisons. You mentioned that sometimes it’s hard to remember where a certain preference is. I agree. So enhancement request: add a search-box to the preferences window. Maybe like in, for example, UltraEdit. It shows you all preferences containing your search criteria. Actually, we do have a search mechanism type the search string, we auto-filter the preferences Is there a version of SQL Developer that will connect to an 8i database (Yes, I realize how old that database version is!) Sorry, no. We also don’t have a version that will run on Windows 3.11 for Workgroups…probably. How do we access your blog? Carefully, and with much trepidation. When you’re ready, go to http://www.thatjeffsmith.com Is there a way to get good formatting with predefined settings? I believe the questioner is referring to the script output a la SQL*Plus formatting commands. Yes, there is. You can build your formatting commands into your login.sql script, and those will be applied for your script execution sessions. Example here. Why this version 4.0 doesn’t support external plugins? It does, it just requires the plugin developer to re-factor it for OSGi. This came about when we updated the JDeveloper framework to the later 11g/12c stuff. Any change in hookup with SVN? The only change with Subversion is that internally we’re using 1.7 stuff now. You can use SQLDev to work with a 1.8 SVN server, but if you get a working copy with a 1.8 client SQLDev won’t be able to do anything with it… Command line utilities ? improvements Yes! The long answer is here. Is that a Hint or a Comment?? /*CSV*/ It’s a comment – the database won’t recognize it, but SQLDev does when it goes through our statement pre-processor. We’ll redirect the output through our CSV formatter before displaying the results in the Script Output panel. That’s why this will ONLY work in SQL Developer. Are you selecting “”Run Script”" to get that CSV or HTML output, rather than “”Run Statement”"? Yes, the formatter hints like the CSV one mentioned above only make sense in a script output panel vs a grid. How do you save relational models once they’re defined? I’ve had trouble with setting one up, “”saving”" it, then the design work I did is longer there when loading it later. File – Data Modeler – Save. If you’re running the Modeler inside of SQL Developer, the menu’ing interface can get a bit tricky. That’s why I recommend using the stand along if you’re doing anything with a model that takes more than 5 minutes. 
See how the Data Modeler menus are folded up under the SQL Dev menus? Can u unplug and plug into another container in a database with only sqldeveloper? Yes, you can ‘Detach’ a multitentant 12c Database ‘pluggable’ and plug it into another instance. You have the option to copy or move the files. This isn’t a trivial operation, pay attention Can you run APEX code directly on the adopter? No, at least not as I understand your question. Give me an example and I can give you a better example. Is there a way that when u click on a particular table it wouldn’t show the table with the info but just to see the columns underneath clicking on the node? Yes, another one of my tips! Disable Tools Preferences Datbase ObjectViewer ‘Open Object on Single Click.’ Is there a patch to allow a double click on a procedure on an open package body to take you to that procedure in the editor? This has been fixed for EA3 – to be released soon. Can you open the spec with the body? You can open the spec or the body, and then also open the other. But you can’t open both with a single click. So if you want you can set it to CSV but can you also see it as a regular result set in rows and then click in the results to export to excel? If you run your query as a statement with Ctrl-Enter, you can send the data to Excel via the Export dialog. Will it do intellisense like using the alias and pop up the column, object names? Yes! You can select more than one column… Can a DBA turn off items from a high level for users so the only thing they can perform would be selects? A DBA should turn things ON, not OFF. Create a user with only CONNECT and required SELECT privs and you’re good to go, regardless of which application they are using. I use PL/SQL Developer from allround automations and was SQL Developer illiterate and now I like this for myself as a DBA. Now I get to train developers on this tool since they have been asking how to use this tool. Thank you. No, THANK YOU! Can you run multi queries in the worksheet after you added it to the worksheet? Yes, highlight what you want to run, and hit Ctrl-Enter. Can you export the result sets to excel, etc. Yes. In version 4.0 and going forward, I recommend you use the XLSX option for exports. It will run faster and consume much, much less memory. Will this be available after the webinar? If you are a ODTUG member, check out the webinar recordings in the archives. That’s worth the $99 right there. Ask your boss if they have $99 in their training budget for you. If not, maybe time to look for another job? Can you run command lines from this tool? Like executes without issuing a command line prompt? Ok, I’m stumped on this one. Not sure what you’re asking. You can setup external tools under the Tools menu, and from there you could probably rig what you’re looking for, but I’m not sure what you’re looking for… This maybe?Where and when to put the program Is there any way to save a copy database command set (certain tables/views etc) in a script? Yes! Create a cart with the objects you want to be used in the Copy. Then use the new command-line interface to kick off SQL Developer to do the copy of those said objects. How can we export the preference and then import them into different or same version of SQL Developer ? Today, there’s no interface for this. 
But you could copy the files around manually…Kris Rice has a cool idea where you can set your preferences to be saved to your local drop box folder and then you can use SQL Developer from anywhere with the same preferences What happens to SQL*Plus commands like COL & BREAK Nothing. Those are not currently supported. Is there a place where all “”hotkey”" functionality is listed? thanks Yes. Tools – Preferences – Shortcut Keys. And you can change them! Any tips for the DBA side of things? will the SQL generated for objects have more information (e.g. user privileges) in v4? You can get this now. In Tools – Preferences – Database – Utilities – Export, check ‘Grants.’ Voila! You now have the code necessary to recreate your object privileges Is there a limit on the number of rows that could be imported / exported from/to excel ? The only hard-coded limit lies in Excel. For best performance, use v4 and XLSX formats for Exports. Is there a way to see/watch active sessions to see current SQL and the explain plan being used, etc. Kind of like that frog product. Cough, yes. Tools – Monitor Sessions. Click on session, see SQL and plan. The plan was added in v4. If you’re not in version 4, use the Reports – Active Sessions to get the plans. In the DBA section is there a way to manage say tablespaces to add data files, shrink, edit profiles, etc. Yes, we support all of that. View – DBA. Connect, go to the Storage node. Are you (Jeff) available for a live presentation at our Oracle User Group here in Indiana? Maybe. Email me and we’ll see, [email protected] Where do I go to download sql developer 4.0? The Internet of course! Can you directly edit query results? Nope. But what I think you’re asking is, can I edit the data in the tables that are reflected in my query results? You can change the query results by changing your query of course. Or this. Can you show html example? Sure. I’d embed the HTML here, but it’s a lot of code, try it for yourself! How can I quickly close many SQL worksheet windows, but not all? Window – Documents. Multi-select, hit the ‘Close Document(s)’ button. What does the vertical red line denote? That’s the margin. Tells you when you’ve typed too far and it’s time for a carriage return. Did DBA/Database Status/Instance Viewer make it officially into 4.0? It was sort-of included in the first EA. I have NO idea what you’re talking about, WINK-WINK. No, it’s not in v4.0. Is there a “”handy”" way to debug trigger code? Yes, open your trigger. Hit the debug button. Works great as long as it’s a DML trigger. Will you make your presentation file available for us ( in PPT and/or PDF format ) ? It’s on SlideShare. How do you get SqlDeveloper to escape ‘ correctly when you use the wizard to export data as insert statements? If it’s not doing that, it’s a bug. I’ll take a look at that scenario ASAP.

    Read the article

  • Application Demos in UPK

    - by [email protected]
    Over the years, User Productivity Kit has expanded to include solutions to many project challenges. As of UPK 3.6.1, solutions are provided for pre and post application go-live learning, application testing, system documentation, presentation output, and more. New in UPK 3.6.1 are additional features that can be used effectively for application demo purposes. This can come in handy when you need to do a demo but don't want to show or can't show the live application. Maybe you're doing a presentation for a group of project stakeholders and want to focus on the business workflow implemented by the application rather than the mechanics of using it. Or possibly, you need to show the application but you're disconnected from any network preventing you from running the live application. In any of these cases, a presentation aid that represents the live application is what's needed. Previous versions of the UPK topic player would allow you to do this but would always show those UPK user interface elements that help a user learn the application. When you're presenting the narrative live, the UPK bubbles can be a distraction. UPK 3.6.1 provides some new features that allow you to control whether the bubbles display. There are two ways to hide bubbles in a topic. The first is a topic property that allows you to control bubbles across the entire topic. There are 3 settings for the Show Bubbles topic property. The default setting is Use frame settings which allows you to control whether bubbles display on a frame by frame basis. When you choose Always, the bubbles will always display regardless of the frame setting. The final choice is Never. Choosing Never will hide every bubble in your topic with one setting change. As with Always, choosing Never will ignore the frame setting. The second way to control the bubbles is at the frame level. First ensure that the topic's Show Bubbles property is set to Use frame settings. Navigate to the frame on which you want to turn off the bubble and click the Display bubble for this frame button to turn off the bubble. When you play the topic, the bubble will no longer be displayed. Depending on your needs, you might also use another longstanding UPK feature that allows you to control whether the action area displays on a frame. Just click the Action area on/off button to toggle its display. I've found the frame properties to be useful beyond creating presentation aids. When creating "See It!" only topics for more advanced users, I may hide the bubbles on some of the more straightforward frames. For example, if I have a form where one needs to fill out an address, I may display the first bubble in the sequence and explain what the subsequent steps are doing. I then hide bubbles on the remaining frames which are the more mechanical steps of entering the address. We'd like to hear your thoughts on this new UPK feature. Use the comments below to tell us how you've used it. John Zaums Senior Director, Product Development Oracle User Productivity Kit

    Read the article

  • ODI SDK: Retrieving Information From the Logs

    - by Christophe Dupupet
    It is fairly common to want to retrieve data from the ODI logs: statistics, execution status, even the generated code can be retrieved from the logs. The ODI SDK provides a robust set of APIs to parse the repository and retreve such information. To locate the information you are looking for, you have to keep in mind the structure of the logs: sessions contain steps; steps containt tasks. The session is the execution unit: basically, each time you execute something (interface, package, procedure, scenario) you create a new session. The steps are the individual entries found in a session: these will be the icons in your package for instance. Or if you are running an interface, you will have one single step: the interface itself. The tasks will represent the more atomic elements of the steps: the individual DDL, DML, scripts and so forth that are generated by ODI, along with all the detailed statistics for that task. All these details can be retrieved with the SDK. Because I had a question recently on the API ODIStepReport, I focus explicitly in this code on Scenario logs, but a lot more can be done with these APIs. Here is the code sample (you can just cut and paste that code in your ODI 11.1.1.6 Groovy console). Just save, adapt the code to your environment (in particular to connect to your repository) and hit "run" //Created by ODI Studioimport oracle.odi.core.OdiInstanceimport oracle.odi.core.config.OdiInstanceConfigimport oracle.odi.core.config.MasterRepositoryDbInfo import oracle.odi.core.config.WorkRepositoryDbInfo import oracle.odi.core.security.Authentication  import oracle.odi.core.config.PoolingAttributes import oracle.odi.domain.runtime.scenario.finder.IOdiScenarioFinder import oracle.odi.domain.runtime.scenario.OdiScenario import java.util.Collection import java.io.* /* ----------------------------------------------------------------------------------------- Simple sample code to list all executions of the last version of a scenario,along with detailed steps information----------------------------------------------------------------------------------------- */ /* update the following parameters to match your environment => */def url = "jdbc:oracle:thin:@myserver:1521:orcl"def driver = "oracle.jdbc.OracleDriver"def schema = "ODIM1116"def schemapwd = "ODIM1116PWD"def workrep = "WORKREP1116"def odiuser= "SUPERVISOR"def odiuserpwd = "SUNOPSIS" // Rather than hardcoding the project code and folder name, // a great improvement here would be to parse the entire repository def scenario_name = "LOAD_DWH" /*Scenario Name*/ /* <=End of the update section */ //--------------------------------------//Connection to the repository// Note for ODI 11.1.1.6: you could use predefined odiInstance variable if you are // running the script from a Studio that is already connected to the repository def masterInfo = new MasterRepositoryDbInfo(url, driver, schema, schemapwd.toCharArray(), new PoolingAttributes())def workInfo = new WorkRepositoryDbInfo(workrep, new PoolingAttributes())def odiInstance = OdiInstance.createInstance(new OdiInstanceConfig(masterInfo, workInfo)) //--------------------------------------// In all cases, we need to make sure we have authorized access to the repositorydef auth = odiInstance.getSecurityManager().createAuthentication(odiuser, odiuserpwd.toCharArray())odiInstance.getSecurityManager().setCurrentThreadAuthentication(auth) //--------------------------------------// Retrieve the scenario we are looking fordef odiScenario = 
((IOdiScenarioFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiScenario.class)).findLatestByName(scenario_name) if (odiScenario == null){    println("Error: cannot find scenario "+scenario_name);    return} //--------------------------------------// Retrieve all reports for the scenario def OdiScenarioReportsList = odiScenario.getScenarioReports() println("*** Listing all reports for Scenario \""+scenario_name+"\" ") //--------------------------------------// For each report, print the folowing:// - start time// - duration// - status// - step reports: selection of details for (s in OdiScenarioReportsList){        println("\tStart time: " + s.getSessionStartTime())        println("\tDuration: " + s.getSessionDuration())        println("\tStatus: " + s.getSessionStatus())                def OdiScenarioStepReportsList = s.getStepReports()        for (st in OdiScenarioStepReportsList){            println("\t\tStep Name: " + st.getStepName())            println("\t\tStep Resource Name: " + st.getStepResourceName())            println("\t\tStep Start time: " + st.getStepStartTime())            println("\t\tStep Duration: " + st.getStepDuration())            println("\t\tStep Status: " + st.getStepStatus())            println("\t\tStep # of inserts: " + st.getStepInsertCount())            println("\t\tStep # of updates: " + st.getStepUpdateCount()+'\n')      }      println("\t")}

    Read the article

  • O'Reilly Deal of the Day 30/Nov/2011 - Programming Windows® Identity Foundation

    - by TATWORTH
    Today's Deal of the Day from O'Reilly at http://shop.oreilly.com/product/9780735627185.do is "Programming Windows® Identity Foundation": "Get hands-on guidance designed to help you put the newest .NET Framework component – Windows Identity Foundation, the identity and access logic for all on-premises and cloud development – to work." I have reviewed this book previously at http://geekswithblogs.net/TATWORTH/archive/2010/12/24/programming-windows-identity-foundation---isbn-978-0-7356-2718-5.aspx. It is a book I recommend to all Dot Net development teams.

    Read the article

  • Cloud Based Load Testing Using TF Service & VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is ‘Cloud Based Load Testing’. In this blog post I’ll walk you through, What is Cloud Based Load Testing? How have I been using this feature? – Success story! Where can you find more resources on this feature? What is Cloud Based Load Testing? It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress but also gives you the ability to test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry I have realized that the biggest barriers in Load Testing & Performance Testing adoption are, High infrastructure and administration cost that comes with this phase of testing Time taken to procure & set up the test infrastructure Finding use for this infrastructure investment after completion of testing Is cloud the answer? 100% Visual Studio Compatible Scalable and Realistic Start testing in < 2 minutes Intuitive Pay only for what you need Use existing on premise tests on cloud There are a lot of vendors out there offering Cloud Based Load Testing, to name a few, Load Storm Soasta Blaze Meter Blitz And others… The question you may want to ask is, why should you go with Microsoft’s Cloud based Load Test offering. If you are a Microsoft shop or already have investments in Microsoft technologies, you’ll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing Web tests authored in Visual Studio 2010 or Visual Studio 2012 will run on the cloud without requiring any modifications what so ever. Microsoft’s cloud test rig also supports API based testing, for example, if you are building a WPF application which consumes WCF services, you can write unit tests to invoke the WCF service, these tests can be run on the cloud test rig and loaded with ‘N’ concurrent users for performance testing. If you have your assets already hosted in the Azure and possibly in the same data centre as the Cloud test rig, your Azure app will not incur a usage cost because of the generated traffic since the traffic is coming from the same data centre. The licensing or pricing information on Microsoft’s cloud based Load test service is yet to be announced, but I would expect this to be priced attractively to match the market competition.   The only additional configuration required for running load tests on Microsoft Cloud based Load Tests service is to select the Test run location as Run tests using Visual Studio Team Foundation Service, How have I been using Microsoft’s Cloud based Load Test Service? I have been part of the Microsoft Cloud Based Load Test Service advisory council for the last 7 months. This gave the opportunity to see the product shape up from concept to working solution. I was also the first person outside of Microsoft to try this offering out. This gave me the opportunity to test real world application at various clients using the Microsoft Load Test Service and provide real world feedback to the Microsoft product team. One of the most recent systems I tested using the Load Test Service has been an insurance quote generation engine. 
This insurance quote generation engine is:
      - hosted in Windows Azure
      - expected to get quote requests from across the globe
      - expected to handle 5 million quote requests in a day (it is not clear how this load will be distributed across the day)
    There was no way I could simulate that kind of load on premise without standing up additional hardware. But Microsoft’s Cloud Based Load Test service allowed me to test my key performance testing scenarios, i.e. simulating expected load, endurance testing, threshold testing and testing for latency.
    Simulating expected load: approach to devising a load pattern. My approach to devising a load test pattern has been to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. So, for example, invoking 1 quote from the quote engine takes 0.5 seconds. Now if you do the math:
      1 quote request by 1 user = 0.5 seconds, i.e. 2 quotes per second
      quotes generated by 1 user in 24 hours = 2 * 60 * 60 * 24 = 172,800
      quotes generated by 30 users in 24 hours = 172,800 * 30 = 5,184,000
    This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, etc. then you can devise your own load pattern. Some examples of load test patterns can be found here.
    Endurance Testing. To test for endurance, I loaded the quote generation engine with an expected fixed user load and ran the test for a very long duration, such as over 48 hours, and observed the effect of the long running test on the Azure infrastructure. Currently the Microsoft Load Test service does not support metrics from the machine under test. I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test.
    Threshold Testing. To figure out how much user load the application could cope with before falling on its belly, I opted to step load the quote generation engine by incrementing the user load, with different variations of incremental user load per minute, till the application crashed out and forced an IIS reset.
    Testing for Latency. Currently the Microsoft Load Test service does not support generating geographically distributed load. I, however, deployed the insurance quote generation engine in different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (this feature is provided out of the box right from Visual Studio 2010), I could see the difference in response times.
    More resources on the Microsoft Cloud Based Load Test Service. A few important links to get you started:
      - Download Visual Studio Ultimate 2013 Preview
      - Getting started guide for load testing using Team Foundation Service
      - Troubleshooting guide for FAQs and known issues
      - Team Foundation Service forum for questions and support
      - Detailed demo and presentation (link to Tech-Ed session recording)
      - Detailed demo and presentation (link to Build session recording)
    There are a few limits on the usage of the Microsoft Cloud Based Load Test service that you can read about here. If you have any feedback on the Microsoft Cloud Based Load Test service, feel free to share it with the product team via the Visual Studio User Voice forum. I hope you found this useful. Thank you for taking the time out and reading this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
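    As a footnote to the post above, here is a minimal sketch of the kind of unit test that could be pointed at the quote service and then scaled up on the cloud test rig for API-based testing. The QuoteServiceClient proxy, the QuoteRequest contract and the GenerateQuote operation are hypothetical placeholders, not the actual service tested in this post; substitute the WCF proxy that Visual Studio generates for your own service.

      using System.Diagnostics;
      using Microsoft.VisualStudio.TestTools.UnitTesting;

      [TestClass]
      public class QuoteEngineLoadTests
      {
          [TestMethod]
          public void GenerateQuote_RespondsWithinHalfASecond()
          {
              // Hypothetical WCF proxy and data contract - replace with your own generated client.
              var client = new QuoteServiceClient();
              var request = new QuoteRequest { Age = 30, Postcode = "SE1 9PZ" };

              var stopwatch = Stopwatch.StartNew();
              var quote = client.GenerateQuote(request);
              stopwatch.Stop();

              Assert.IsNotNull(quote);
              // 0.5 seconds is the single-user response time used in the load calculation above.
              Assert.IsTrue(stopwatch.ElapsedMilliseconds <= 500);
          }
      }

    A load test in the same solution can then reference this unit test and ramp it up to the 30 concurrent users worked out above.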

    Read the article

  • Build Explorer version 1.1 for Visual Studio Team Explorer is released

    - by terje
    Our free extension to Visual Studio, the folder based Build Explorer Version 1.1, has now been released and uploaded to the Visual Studio Gallery and Codeplex. We have collected up a few changes and some bug fixes, as follows:
    Changes:
      - Queue Default Builds can now be optionally fully enabled, fully disabled or enabled just for leaf nodes (=disabled for folders). If you have a large number of builds it was pretty scary to be able to launch all of them with just one click. However, it is nice to avoid having the dialog box up when you want to just run off a single build. That’s the reasoning behind the 3rd choice here.
      - Auto fill-in of the builds at start up and refresh. This was a request that came up a lot, and which was also irritating to us. When the Team Project is opened, the Build Explorer will start by itself and fill up its tree, so you don’t need to click the node anymore. There was also quite a bit of flashing when the tree filled up; this has been reduced to just a single top level fill before it collapses the node. The speed of the buildup of the tree has also been increased.
      - The “All Build Definitions” node is now shown on top of the list.
      - Login box appeared in certain cross domain situations. This was a fix for the TF30063 authentication problem we had in the beginning. Hopefully the new code has that fixed properly so that both the login box and the TF30063 are gone forever. Our testing so far seems to indicate it works. If anyone gets a real problem here there are two workarounds: 1) Turn off the auto refresh to reduce the issue. If this doesn’t fix it, then 2) please reinstall the former version (go to the Codeplex download site if you don’t have it anymore). Write a comment to this blog post with a description of what happens, and I will send a temporary fix asap.
    Bug fixes:
      - The folder name matching was case sensitive, so “Application.CI” and “application.CI” created two different folders.
      - View all builds was not shown for leaf nodes, and view builds didn’t work in all cases. There were some inconsistencies here which have been fixed.
      - Partly fixed: The context menu to queue a new build for disabled builds should be removed, but that was a difficult one and is still on the list; however, the command will not do anything for a disabled build.
      - Using Queue Default Builds on a folder which had some disabled builds below it made an error box appear and ruined the whole experience.
    As a result of these fixes some new options have been introduced, as shown below. The first two settings, the Separator symbol and the options for how to handle Queuing of default builds, are set per Team Project and are stored in TFS source control under the BuildProcessTemplates folder, with the name Inmeta.VisualStudio.BuildExplorer.Settings.xml. The next two settings need some explanation. They handle the behavior of the auto update of the build folders. First, these are stored in the local registry per user, at the key HKEY_CURRENT_USER\Software\Inmeta\BuildExplorer. The first option, Use Timed Refresh at Startup: if turned off, you will need to click the node as it is done in Version 1.0. The second option is a timed value, the time between the Build Explorer node being created and the scanning of the Build folders starting. It is assumed that this is enough, and the tests so far indicate this. If you have very many builds and you see that the explorer doesn’t get them all, try to increase this value, and of course, notify me of your case, either here or on the Visual Studio Gallery site.
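    If you are curious where those per-user options end up, a few lines of C# will dump whatever the Build Explorer has stored under its registry key. The snippet below only enumerates the values rather than assuming their names, since the exact value names are not documented in this post.

      using System;
      using Microsoft.Win32;

      class BuildExplorerSettingsDump
      {
          static void Main()
          {
              // Per-user key mentioned above; it only exists once the extension has saved its settings.
              using (RegistryKey key = Registry.CurrentUser.OpenSubKey(@"Software\Inmeta\BuildExplorer"))
              {
                  if (key == null)
                  {
                      Console.WriteLine("Build Explorer has not written any settings yet.");
                      return;
                  }

                  foreach (string name in key.GetValueNames())
                  {
                      Console.WriteLine("{0} = {1}", name, key.GetValue(name));
                  }
              }
          }
      }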

    Read the article

  • The 2010 JavaOne Java EE 6 Panel: Where We Are and Where We're Going

    - by janice.heiss(at)oracle.com
    An informative article, based on a 2010 JavaOne (San Francisco, California) panel session, surveys a variety of expert perspectives on Java EE 6. The panel, moderated by Oracle's Alexis Moussine-Pouchkine, consisted of:
      * Adam Bien, Consultant, Author/Speaker, adam-bien.com
      * Emmanuel Bernard, Principal Software Engineer, JBoss by Red Hat
      * David Blevins, Senior Software Engineer, co-founder of the OpenEJB project and a founder of Apache Geronimo
      * Roberto Chinnici, Consulting Member of Technical Staff, Oracle
      * Jim Knutson, Java EE Architect, IBM
      * Reza Rahman, Lead Engineer, Caucho Technology, Inc.
      * Krasimir Semerdzhiev, Development Architect, SAP Labs Bulgaria
    The panel addressed such topics as Platform and API Adoption, Contexts and Dependency Injection (CDI), Java EE vs. Spring, the impact of Java EE 6 on tooling and testing, and Java EE.next, along with a variety of audience questions. Read the entire article for the whole picture.

    Read the article

  • Podcast Show Notes: Evolving Enterprise Architecture

    - by Bob Rhubart
    The latest series of ArchBeat podcast programs grew out of another virtual meet-up, held on March 11. As with previous meet-ups, I sent out a general invitation to the roster of previous ArchBeat panelists to join me on Skype to talk about whatever topic comes up. For this event, Oracle ACE Directors Mike van Alst and Jordan Braunstein  showed up, along with Oracle product manager Jeff Davies.  The result was an impressive and wide-ranging discussion on the evolution of Enterprise Architecture, the role of technology in EA, the impact of social computing, and challenge of having three generations of IT people at work in the enterprise – each with different perspectives on technology. Mike, Jordan, and Jeff talked for more than an hour, and the conversation was so good that slicing and dicing it to meet the time constraints for these podcasts has been a challenge. The first two segments of the conversation are now available. Listen to Part 1 Listen to Part 2 Part 3 will go live next week, and an unprecedented fourth segment will follow. These guys have strong opinions, and while there is common ground, they don’t always agree. But isn’t that what a community is all about? I suspect that you’ll have questions and comments after listening, so I encourage you to reach out to Mike, Jordan, and Jeff  via the following links: Mike van Alst Blog | Twitter | LinkedIn | Business |Oracle Mix | Oracle ACE Profile Jordan Braunstein Blog | Twitter | LinkedIn | Business | Oracle Mix | Oracle ACE Profile Jeff Davies Homepage | Blog | LinkedIn | Oracle Mix (Also check out Jeff’s book: The Definitive Guide to SOA: Oracle Service Bus)   Coming Soon ArchBeat’s microphones were there for the panel discussions at the recent Oracle Technology Network Architect Days in Dallas and Anaheim. Excerpts from those conversations will be available soon. Stay tuned: RSS Technorati Tags: oracle,otn,enterprise architecture,podcast. arch2arch,archbeat del.icio.us Tags: oracle,otn,enterprise architecture,podcast. arch2arch,archbeat

    Read the article

  • C# Neural Networks with Encog

    - by JoshReuben
    Neural Networks ·       I recently read a book Introduction to Neural Networks for C# , by Jeff Heaton. http://www.amazon.com/Introduction-Neural-Networks-C-2nd/dp/1604390093/ref=sr_1_2?ie=UTF8&s=books&qid=1296821004&sr=8-2-spell. Not the 1st ANN book I've perused, but a nice revision.   ·       Artificial Neural Networks (ANNs) are a mechanism of machine learning – see http://en.wikipedia.org/wiki/Artificial_neural_network , http://en.wikipedia.org/wiki/Category:Machine_learning ·       Problems Not Suited to a Neural Network Solution- Programs that are easily written out as flowcharts consisting of well-defined steps, program logic that is unlikely to change, problems in which you must know exactly how the solution was derived. ·       Problems Suited to a Neural Network – pattern recognition, classification, series prediction, and data mining. Pattern recognition - network attempts to determine if the input data matches a pattern that it has been trained to recognize. Classification - take input samples and classify them into fuzzy groups. ·       As far as machine learning approaches go, I thing SVMs are superior (see http://en.wikipedia.org/wiki/Support_vector_machine ) - a neural network has certain disadvantages in comparison: an ANN can be overtrained, different training sets can produce non-deterministic weights and it is not possible to discern the underlying decision function of an ANN from its weight matrix – they are black box. ·       In this post, I'm not going to go into internals (believe me I know them). An autoassociative network (e.g. a Hopfield network) will echo back a pattern if it is recognized. ·       Under the hood, there is very little maths. In a nutshell - Some simple matrix operations occur during training: the input array is processed (normalized into bipolar values of 1, -1) - transposed from input column vector into a row vector, these are subject to matrix multiplication and then subtraction of the identity matrix to get a contribution matrix. The dot product is taken against the weight matrix to yield a boolean match result. For backpropogation training, a derivative function is required. In learning, hill climbing mechanisms such as Genetic Algorithms and Simulated Annealing are used to escape local minima. For unsupervised training, such as found in Self Organizing Maps used for OCR, Hebbs rule is applied. ·       The purpose of this post is not to mire you in technical and conceptual details, but to show you how to leverage neural networks via an abstraction API - Encog   Encog ·       Encog is a neural network API ·       Links to Encog: http://www.encog.org , http://www.heatonresearch.com/encog, http://www.heatonresearch.com/forum ·       Encog requires .Net 3.5 or higher – there is also a Silverlight version. Third-Party Libraries – log4net and nunit. ·       Encog supports feedforward, recurrent, self-organizing maps, radial basis function and Hopfield neural networks. ·       Encog neural networks, and related data, can be stored in .EG XML files. ·       Encog Workbench allows you to edit, train and visualize neural networks. The Encog Workbench can generate code. Synapses and layers ·       the primary building blocks - Almost every neural network will have, at a minimum, an input and output layer. In some cases, the same layer will function as both input and output layer. 
·       To adapt a problem to a neural network, you must determine how to feed the problem into the input layer of a neural network, and receive the solution through the output layer of a neural network. ·       The Input Layer - For each input neuron, one double value is stored. An array is passed as input to a layer. Encog uses the interface INeuralData to hold these arrays. The class BasicNeuralData implements the INeuralData interface. Once the neural network processes the input, an INeuralData based class will be returned from the neural network's output layer. ·       convert a double array into an INeuralData object : INeuralData data = new BasicNeuralData(= new double[10]); ·       the Output Layer- The neural network outputs an array of doubles, wraped in a class based on the INeuralData interface. ·        The real power of a neural network comes from its pattern recognition capabilities. The neural network should be able to produce the desired output even if the input has been slightly distorted. ·       Hidden Layers– optional. between the input and output layers. very much a “black box”. If the structure of the hidden layer is too simple it may not learn the problem. If the structure is too complex, it will learn the problem but will be very slow to train and execute. Some neural networks have no hidden layers. The input layer may be directly connected to the output layer. Further, some neural networks have only a single layer. A single layer neural network has the single layer self-connected. ·       connections, called synapses, contain individual weight matrixes. These values are changed as the neural network learns. Constructing a Neural Network ·       the XOR operator is a frequent “first example” -the “Hello World” application for neural networks. ·       The XOR Operator- only returns true when both inputs differ. 0 XOR 0 = 0 1 XOR 0 = 1 0 XOR 1 = 1 1 XOR 1 = 0 ·       Structuring a Neural Network for XOR  - two inputs to the XOR operator and one output. ·       input: 0.0,0.0 1.0,0.0 0.0,1.0 1.0,1.0 ·       Expected output: 0.0 1.0 1.0 0.0 ·       A Perceptron - a simple feedforward neural network to learn the XOR operator. ·       Because the XOR operator has two inputs and one output, the neural network will follow suit. Additionally, the neural network will have a single hidden layer, with two neurons to help process the data. The choice for 2 neurons in the hidden layer is arbitrary, and often comes down to trial and error. ·       Neuron Diagram for the XOR Network ·       ·       The Encog workbench displays neural networks on a layer-by-layer basis. ·       Encog Layer Diagram for the XOR Network:   ·       Create a BasicNetwork - Three layers are added to this network. the FinalizeStructure method must be called to inform the network that no more layers are to be added. The call to Reset randomizes the weights in the connections between these layers. var network = new BasicNetwork(); network.AddLayer(new BasicLayer(2)); network.AddLayer(new BasicLayer(2)); network.AddLayer(new BasicLayer(1)); network.Structure.FinalizeStructure(); network.Reset(); ·       Neural networks frequently start with a random weight matrix. This provides a starting point for the training methods. These random values will be tested and refined into an acceptable solution. However, sometimes the initial random values are too far off. Sometimes it may be necessary to reset the weights again, if training is ineffective. These weights make up the long-term memory of the neural network. 
Additionally, some layers have threshold values that also contribute to the long-term memory of the neural network. Some neural networks also contain context layers, which give the neural network a short-term memory as well. The neural network learns by modifying these weight and threshold values. ·       Now that the neural network has been created, it must be trained. Training a Neural Network ·       construct a INeuralDataSet object - contains the input array and the expected output array (of corresponding range). Even though there is only one output value, we must still use a two-dimensional array to represent the output. public static double[][] XOR_INPUT ={ new double[2] { 0.0, 0.0 }, new double[2] { 1.0, 0.0 }, new double[2] { 0.0, 1.0 }, new double[2] { 1.0, 1.0 } };   public static double[][] XOR_IDEAL = { new double[1] { 0.0 }, new double[1] { 1.0 }, new double[1] { 1.0 }, new double[1] { 0.0 } };   INeuralDataSet trainingSet = new BasicNeuralDataSet(XOR_INPUT, XOR_IDEAL); ·       Training is the process where the neural network's weights are adjusted to better produce the expected output. Training will continue for many iterations, until the error rate of the network is below an acceptable level. Encog supports many different types of training. Resilient Propagation (RPROP) - general-purpose training algorithm. All training classes implement the ITrain interface. The RPROP algorithm is implemented by the ResilientPropagation class. Training the neural network involves calling the Iteration method on the ITrain class until the error is below a specific value. The code loops through as many iterations, or epochs, as it takes to get the error rate for the neural network to be below 1%. Once the neural network has been trained, it is ready for use. ITrain train = new ResilientPropagation(network, trainingSet);   for (int epoch=0; epoch < 10000; epoch++) { train.Iteration(); Debug.Print("Epoch #" + epoch + " Error:" + train.Error); if (train.Error > 0.01) break; } Executing a Neural Network ·       Call the Compute method on the BasicNetwork class. Console.WriteLine("Neural Network Results:"); foreach (INeuralDataPair pair in trainingSet) { INeuralData output = network.Compute(pair.Input); Console.WriteLine(pair.Input[0] + "," + pair.Input[1] + ", actual=" + output[0] + ",ideal=" + pair.Ideal[0]); } ·       The Compute method accepts an INeuralData class and also returns a INeuralData object. Neural Network Results: 0.0,0.0, actual=0.002782538818034049,ideal=0.0 1.0,0.0, actual=0.9903741937121177,ideal=1.0 0.0,1.0, actual=0.9836807956566187,ideal=1.0 1.0,1.0, actual=0.0011646072586172778,ideal=0.0 ·       the network has not been trained to give the exact results. This is normal. Because the network was trained to 1% error, each of the results will also be within generally 1% of the expected value.
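Pulling the fragments above together, a complete XOR console program looks roughly like the sketch below. It is assembled from the snippets quoted in this post and assumes the Encog 2.x-era .NET API (the namespace layout is a best guess and later Encog releases renamed several of these types); the training loop condition has also been adjusted so that training continues while the error is still above 1%.

      using System;
      using Encog.Neural.Data.Basic;
      using Encog.Neural.NeuralData;
      using Encog.Neural.Networks;
      using Encog.Neural.Networks.Layers;
      using Encog.Neural.Networks.Training.Propagation.Resilient;

      class XorExample
      {
          static readonly double[][] XorInput =
          {
              new[] { 0.0, 0.0 }, new[] { 1.0, 0.0 },
              new[] { 0.0, 1.0 }, new[] { 1.0, 1.0 }
          };

          static readonly double[][] XorIdeal =
          {
              new[] { 0.0 }, new[] { 1.0 }, new[] { 1.0 }, new[] { 0.0 }
          };

          static void Main()
          {
              // 2 input neurons, 2 hidden neurons, 1 output neuron - as described above.
              var network = new BasicNetwork();
              network.AddLayer(new BasicLayer(2));
              network.AddLayer(new BasicLayer(2));
              network.AddLayer(new BasicLayer(1));
              network.Structure.FinalizeStructure();
              network.Reset();

              INeuralDataSet trainingSet = new BasicNeuralDataSet(XorInput, XorIdeal);

              // Train with Resilient Propagation until the error drops below 1% (or we give up).
              var train = new ResilientPropagation(network, trainingSet);
              int epoch = 0;
              do
              {
                  train.Iteration();
                  epoch++;
              } while (train.Error > 0.01 && epoch < 10000);

              Console.WriteLine("Trained in " + epoch + " epochs, error " + train.Error);
              foreach (INeuralDataPair pair in trainingSet)
              {
                  INeuralData output = network.Compute(pair.Input);
                  Console.WriteLine(pair.Input[0] + "," + pair.Input[1] +
                      ", actual=" + output[0] + ", ideal=" + pair.Ideal[0]);
              }
          }
      }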

    Read the article

  • DDD North 3 Presentation and source code – ‘Event Store - an introduction to a DSD for event sourcing and notifications’

    - by Liam Westley
    Originally posted on: http://geekswithblogs.net/twickers/archive/2013/10/15/ddd-north-3-presentation-and-source-code-ndash-lsquoevent-store.aspx
    Thank you everyone at DDD North. Thanks to all the people who helped organise the cracking conference that is DDD North 3, returning to Sunderland, and the great facilities at the University of Sunderland, and the fine drinks reception at Sunderland Software City. The whole event wouldn’t be possible without the sponsors who ensured over 400 people were kept fed and watered so they could enjoy the impressive range of sessions. And lastly, a thank you to all those delegates who gave up their free time on a Saturday to spend a day dashing between lecture rooms, including a late change to my room which saw 40 people having to brave a journey between buildings in the fine drizzle. The enthusiasm from the delegates always helps recharge my geek batteries.
    Presentation and source code. My presentation, source code, Event Store runners and text files containing the various command line parameters used for curl are now available on GitHub: https://github.com/westleyl/DDDNorth3-EventStore. Don’t worry if you don’t have a GitHub account, you don’t need one; you can just click on the Download Zip button on the right hand menu to download all the files as a single ZIP file. If all you want is the PowerPoint presentation, go to https://github.com/westleyl/DDDNorth3-EventStore/blob/master/Powerpoint/DDDNorth-EventStore.pptx, and click on the View Raw button.
    Downloading and installing Event Store and Tools. Download Event Store from http://download.geteventstore.com – I unzipped these files into C:\EventStore\v2.0.1. Download curl from http://curl.haxx.se/download.html – I downloaded Win64 Generic (with SSL) and unzipped these files into C:\curl, version 7.31.0.
    Running the tools I used in my presentation. Demonstration 1 (running Event Store): You can use one of my Event Store runner command files to run the single node version of Event Store, using default ports of 2213 for HTTP and 1113 for TCP, and with a wildcard HTTP pattern. Both take a single command line parameter to specify the location of the data and log files. The runners assume the single node executable is located in C:\EventStore\v2.0.1, and will place data files and logs beneath C:\EventStore\Data, i.e. RunEventStore.cmd TestData1 will create data files in C:\EventStore\Data\TestData1\Data and log files in C:\EventStore\Data\TestData1\logs. If, when running Event Store, you see the following message, [03288,15,06:23:00.622] Failed to start http server Access is denied, you will either need to run Event Store in an administrator console window, or you can use the netsh command to create a firewall permission to allow HTTP listening (this needs to be run once, in an administrator console window): netsh http add urlacl url=http://*:2213/ user=liam
    You can always delete this later by running: netsh http delete urlacl url=http://*:2213/
    If you want to confirm that everything is running OK, open the management console in a browser by navigating to http://127.0.0.1:2213. If at any point you are asked for a user name and password use the default of ‘admin’/‘changeit’.
    Demonstration 2 (reading and adding data, curl). In my second demonstration I used curl directly from the console to read streams, write events and then read back those events.
On GitHub I have included a set of curl commands, CurlCommandLine.txt, and a sample data file, SampleData.json, to load an event into a DDDNorth3 stream. As there is not much data in the Event Store at this point, I used the $stats-127.0.0.1:2113 stream, which contains performance statistics for Event Store and is updated every 30 seconds (by default).
    Demonstration 3 (projections). On GitHub I have included a sample projection, Projection-ByRoom.txt, which will create streams based on the room in which a session was held on the DDDNorth3 agenda. Browse to the management console, http://127.0.0.1:2213. Click on Projections, New Projection, give it a name, Sessions-ByRoom, and copy in the JavaScript in the Projection-ByRoom.txt file. Select Continuous, tick Emit Enabled and then click on Post. It should run immediately. You may be challenged for the administration login for the management console; if so, use the default user name and password: ‘admin’/‘changeit’.
    Demonstration 4 (C# client). The final demonstration was the Visual Studio 2012 project using the Event Store client – referenced directly as C:\EventStore\v2.0.1\EventStore.ClientAPI.dll, although you can switch this to the latest Event Store client NuGet package. The source code provides a console app for viewing projections with the projection manager (HTTP connection), as well as containing a full set of data for the entire DDDNorth3 agenda. It also deals with the strategy for reading events from newest back to oldest and ignoring older events that have been superseded.
    Resources:
      - Event Store home page: http://www.geteventstore.com/
      - Event Store source code on GitHub: https://github.com/eventstore/eventstore
      - Event Store documentation on GitHub: https://github.com/eventstore/eventstore/wiki (includes index to @RobAshton’s blog series on Event Store at https://github.com/eventstore/eventstore/wiki#rob-ashton---projections-series)
      - Event Store forum in Google Groups: https://groups.google.com/forum/?fromgroups#!forum/event-store
      - TopShelf Windows service wrapper is available on GitHub: https://gist.github.com/trbngr/5083266
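    As a companion to Demonstration 4, here is a rough sketch of writing a single event to the DDDNorth3 stream with EventStore.ClientAPI. This is not the demo code from the GitHub repository; the connection overloads differ between client versions, and the ‘SessionAdded’ event type and JSON body are made up for illustration, so adjust them to whatever your ClientAPI DLL exposes.

      using System;
      using System.Net;
      using System.Text;
      using EventStore.ClientAPI;

      class WriteOneEvent
      {
          static void Main()
          {
              // Single node Event Store listening on the TCP port used by the runners (1113).
              // Create/Connect overloads vary between ClientAPI versions - adjust to match your DLL.
              var connection = EventStoreConnection.Create(new IPEndPoint(IPAddress.Loopback, 1113));
              connection.Connect();

              var body = Encoding.UTF8.GetBytes("{ \"title\": \"Event Store\", \"room\": \"Lecture Theatre 1\" }");
              var evt = new EventData(Guid.NewGuid(), "SessionAdded", true, body, null);

              // Append to the same DDDNorth3 stream that the curl samples write to.
              connection.AppendToStream("DDDNorth3", ExpectedVersion.Any, evt);
              connection.Close();
          }
      }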

    Read the article

  • Commercial Software Development – my presentation for DDD Scotland now available for download

    - by Liam Westley
    Thanks to everyone who voted me onto the DDD Scotland agenda, and to the fantastic audience, some of whom you can see in Craig Murphy's photos of the event: http://www.flickr.com/photos/craigmurphy/4592461745/in/set-72157624025673156 http://www.flickr.com/photos/craigmurphy/4592467645/in/set-72157624025673156 I hope those who came to the session enjoyed it and had a good time. For them, and for those who were on one of the other tracks or who couldn't squeeze in, I've uploaded the presentation for you to download. I created a simpler, smaller PowerPoint without all the fancy animations and video clips, which is available as a compressed ZIP file: http://www.tigernews.co.uk/blog-twickers/dddscot/commercialsoftwaredev.zip I also printed the presentation with speaker notes (which contain most of the information I was talking about) using PDFCreator, which is available as an Adobe Acrobat PDF here: http://www.tigernews.co.uk/blog-twickers/dddscot/commercialsoftwaredev.pdf ... and if PowerPoint presentations don't do it for you, then also thanks to Craig Murphy, you can watch a video of the presentation that I gave at DDD8 in Microsoft TVP, Reading: http://vimeo.com/9216563

    Read the article

  • How to Troubleshoot TFS Build Server Failure?

    - by Tarun Arora
    Ever found your self in this helpless situation where you think you have tried every possible suggestion on the internet to bring the build server back but it just won’t work. Well some times before hunting around for a solution it is important to understand what the problem is, if the error messages in the build logs don’t seem to help you can always enable tracing on the build server to get more information on what could possibly be the root cause of failure. In this blog post today I’ll be showing you how to enable tracing on, - TFS 2010/11 Server - Build Server - Client Enable Tracing on Team Foundation Server 2010/2011 On the Team Foundation Server navigate to C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services, right click web.config and from the context menu select edit.          Search for the <appSettings> node in the config file and set the value of the key ‘traceWriter’ to true.          In the <System.diagnostics> tag set the value of switches from 0 to 4 to set the trace level to maximum to write diagnostics level trace information.          Restart the TFS Application pool to force this change to take effect. The application pool restart will impact any one using the TFS server at present. Note - It is recommended that you do not make any changes to the TFS production application server, this can have serious consequences and can even jeopardize the installation of your server.          Download the Debug view tool from http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx and set it to capture “Global Events”. Perform any actions in the Team Explorer on the client machine, you should be able to see a series of trace data in the debug view tool now.         Enable Tracing on Build Controller/Agents Log on to the Build Controller/Agent and Navigate to the directory C:\Program Files\Microsoft Team Foundation Server 2010\Tools         Look for the configuration file ‘TFSBuildServiceHost.exe.config’ if it is not already there create a new text file and rename it to ‘TFSBuildServiceHost.exe.config’         To Enable tracing uncomment the <system.diagnostics> and paste the snippet below if it is not already there. <configuration> <system.diagnostics> <switches> <add name="BuildServiceTraceLevel" value="4"/> </switches> <trace autoflush="true" indentsize="4"> <listeners> <add name="myListener" type="Microsoft.TeamFoundation.TeamFoundationTextWriterTraceListener, Microsoft.TeamFoundation.Common, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" initializeData="c:\logs\TFSBuildServiceHost.exe.log" /> <remove name="Default" /> </listeners> </trace> </system.diagnostics> </configuration> The highlighted path above is where the Log file will be created. If the folder is not already there then create the folder, also, make sure that the account running the build service has access to write to this folder.         Restart the build Controller/Agent service from the administration console (or net stop tfsbuildservicehost & net start tfsbuildservicehost) in order for the new setting to be picked up.         Enable TFS Tracing on the Client Machine On the client machine, shut down Visual Studio, navigate to C:\Program Files\Microsoft Visual Studio 10.0\Common 7\IDE          Search for devenv.exe.config, make a backup copy of the config file and right click the file and from the context menu select edit. If its not already there create this file.          
Edit devenv.exe.config by adding the below code snippet before the last </configuration> tag <system.diagnostics> <switches> <add name="TeamFoundationSoapProxy" value="4" /> <add name="VersionControl" value="4" /> </switches> <trace autoflush="true" indentsize="3"> <listeners> <add name="myListener" type="Microsoft.TeamFoundation.TeamFoundationTextWriterTraceListener,Microsoft.TeamFoundation.Common, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" initializeData="c:\tf.log" /> <add name="perfListener" type="Microsoft.TeamFoundation.Client.PerfTraceListener, Microsoft.TeamFoundation.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"/> </listeners> </trace> </system.diagnostics> The highlighted path above is where the Log file will be created. If the folder is not already there then create the folder. Start Visual Studio and after a bit of activity you should be able to see the new log file being created on the folder specified in the config file. Other Resources Below are some Key resource you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN.   Thank you for taking the time out and reading this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Have you come across an interesting one to one with the build server, please share your experience here. Questions/Feedback/Suggestions, etc please leave a comment. Thank You! Share this post : CodeProject
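    As a footnote to the tracing walkthrough above, the switch and listener entries in these config files are standard System.Diagnostics plumbing, which you can see working in isolation with a small console app. This is only an illustration of the mechanism; the TFS and Visual Studio binaries register their own switches and listeners exactly as shown in the config snippets above, and the log path below is just a sample (the folder must exist).

      using System.Diagnostics;

      class TraceDemo
      {
          static void Main()
          {
              // Equivalent of a <switches> entry with value="4" (4 = Verbose).
              var level = new TraceSwitch("VersionControl", "Sample trace switch", "4");

              // Equivalent of the <listeners> entry: send trace output to a text log file.
              Trace.Listeners.Add(new TextWriterTraceListener(@"c:\logs\trace-sample.log"));
              Trace.AutoFlush = true; // matches autoflush="true" in the config files

              Trace.WriteLineIf(level.TraceVerbose, "Written because the switch level is 4 (Verbose).");
              Trace.WriteLineIf(level.TraceError, "Errors are written at any level of 1 or higher.");
          }
      }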

    Read the article

  • Skinning af:selectOneChoice

    - by Duncan Mills
    A question came in today about how to skin the selection button of an <af:selectOneChoice>. If you have a delve in the ADF Skinning editor, you'll find that there are selectors for the selectOneChoice when in compact mode (af|selectOneChoice::compact-dropdown-icon-style); however, there is no selector for the icon in the "normal" mode. I had a quick delve into the skinning source files that you can find in the adf-richclient-impl-11.jar and likewise there seemed to be no association there. However, a quick sample page and a peek with the Chrome developer tools revealed the problem. The af:selectOneChoice gets rendered in the browser as a good old <select> element (reasonable enough!). Herein lies the problem, and the reason why there is no skin selector: the <select> HTML element does not have a standard way of replacing the image used for the dropdown button. If you have a search around with your favorite search engine, you can find various workarounds and solutions for this. For example, using Chrome and Safari you can define the following for the select element:
      select {
        -webkit-appearance: listbox;
        background-image: url(blob.png);
        background-position: center right;
        background-repeat: no-repeat;
      }
    Which gives a very exciting select box.

    Read the article

  • New article available in "SOA Suite Essentials for WLI Users" series: Dynamic Data Lookup in a Busin

    - by simone.geib
    It is my pleasure to announce the publishing of another article in our "SOA Suite Essentials for WLI Users" series: "Dynamic Data Lookup in a Business Process: Meta Data Cache Control in Oracle WebLogic Integration and Domain Value Maps in SOA Suite". This article explains how dynamic data can be retrieved in a business process using Domain Value Maps in SOA Suite and shows the similarities to the WLI XML MetaData Cache Control. Lots of customers have asked about this comparison and I hope they will find it useful. The article follows "Setting Web Service and JCA Adapter Endpoints Dynamically in Oracle SOA Suite" which describes how web services and JCA adapter endpoints in SOA Suite can be changed at run-time, and so completes the use case where a BPEL process writes to a file (via file adapter) and the output directory and the file name are set dynamically. Please let me know what you think about the series and this specific article.

    Read the article

  • WebCenter Spaces 11g - UI Customization

    - by john.brunswick
    When developing on top of a portal platform to support an intranet or extranet, a portion of the development time is spent adjusting the out-of-box user templates to adjust the look and feel of the platform for your organization. Generally your deployment will not need to look anything like the sites posted on http://cssremix.com/ or http://www.webcreme.com/, but will meet business needs by adjusting basic elements like navigation, color palette and logo placement. After spending some time doing custom UI development with WebCenter Spaces 11G I have gathered a few tips that I hope can help to speed anyone's efforts to quickly "skin" a WebCenter Spaces deployment. A detailed white paper was released that outlines a technique to quickly update the UI at runtime: http://www.oracle.com/technology/products/webcenter/pdf/owcs_r11120_cust_skins_runtime_wp.pdf. Customizing at "runtime" means using CSS and images to adjust the page layout and feel, which when done creatively can change the pages drastically. WebCenter also allows for detailed templates to manage the placement of major page elements like menus, sidebar, etc., but by adjusting only images and CSS we can end up with something like the custom solution shown below. Let's dive right in and take a look at some tools to make our efforts more efficient.

    Read the article

  • Automatically Create Your Project’s NuGet Package Every Time It Builds Via NuGet

    - by deadlydog
    Originally posted on: http://geekswithblogs.net/deadlydog/archive/2013/06/22/automatically-create-your-projectrsquos-nuget-package-every-time-it-builds.aspx So you’ve got a super awesome library/assembly that you want to share with others, but you’re too lazy to actually use NuGet to package it up and upload it to the gallery; or maybe you don’t know how to create a NuGet package and don’t have the time or desire to learn.  Well, my friends, now this can all be handled for you automatically. Read more at http://blog.danskingdom.com/automatically-create-your-projects-nuget-package-every-time-it-builds-via-nuget/

    Read the article

  • Agile Awakenings and the Rules of Agile

    - by Robert May
    For those that care, you can read my history of management and technology to understand why I think I’m qualified to talk about this at all.  It’s boring, so feel free to skip it. Awakenings I first started to play around with the idea of “agile” in 2004 or 2005.  I found a book on the Rational Unified Process that I thought was good, and attempted to implement parts of it.  I thought I was agile, but really, it wasn’t.   I still didn’t understand the concept of a team.  I still wanted to tell the team what to do and how to get it done.  I still thought I was smarter than the team. After that job, I started work on another project and began helping that team.  The first few months were really rough.  We were implementing Scrum, which was relatively new to everyone on the team, and, quite frankly, I was doing a poor job of it.  I was trying to micro-manage every aspect of the teams work, and we were all miserable. The moment of change came when the senior architect bailed on the project.  His comment to me was: “This isn’t Agile.  Where are the stand-ups?  Where are the stories?”  He was dead on, and I finally woke up.  I finally realized that I was the problem!  I wasn’t trusting the team.  I wasn’t helping the team.  I was being a manager. Like many (most?), I was claiming to be Agile and use Scrum, but I wasn’t in fact following the rules Scrum.  Since then, I’ve done a lot of studying, hands on practice, coaching of many different teams, and other learning around Scrum, and I have discovered that Scrum has some rules that must be followed for success, even though the process is about continuous improvement. I’ve been practicing Scrum right for about 4 years now and have helped multiple teams implement it successfully, so what you’re about to get is based on experience, rather than just theory. The Rules of Scrum In my experience, what I’ve found is that most companies that claim to be doing Scrum or Agile are actually NOT doing either.  This stems largely because they think that they can “adopt the rules of Agile that fit their organization.”  Sadly, many of them think that this means they can adopt iterations (sprints) and not much else.  Either that, or they think they can do whatever they want, or were doing before, and call it Scrum.  This is simply not true. Here are some rules that must be followed for you to really be doing Scrum.  I’ll go into detail on each one of these posts in future blog posts and update links here.  My intent is that this will help other teams implementing scrum to see more success. Agile does not allow you to do whatever you want A Product Owner is required A ScrumMaster is required The team must function as a Team, and QA must be part of the team Support from upper management is required A prioritized product backlog is required A prioritized sprint backlog is required Release planning is required Complete spring planning is required Showcases are required Velocity must be measured Retrospectives are required Daily stand-ups are required Visibility is absolutely required For now, I think that’s enough, although I reserve the right to add more.  If you’re breaking any of these rules, you’re probably not doing Scrum.  There are exceptions to these rules, but until you have practiced Scrum for a while, you don’t know what those exceptions are. Breaking the Rules Many teams break these rules because they are the ones that expose the most pain.  Scrum is not Advil.  It’s not intended to mask the pain, its intended to cure it.  
Let me explain that analogy a bit more.  Recently, my 7 year old son broke his arm, quite severely (see the X-Ray to the right).  That caused him a great deal of pain.  We went first to one doctor, and after viewing the X-Ray, they determined that there was no way that they’d cast the arm at their location.  It was simply too bad of a break for them to deal with.  They did, however, give him some Advil for the pain and put a splint on his arm to stabilize the broken bones.  Within minutes, he was feeling much better.  Had we been stupid, we could have gone home and he’d have been just as happy as ever . . . until the pain medication wore off or one of his siblings touched the splint.  Then, all of that pain would come right back to the top.  Sure, he could make it go away by just taking more Advil and moving the splint out of the way, but that wasn’t going to fix the problem permanently. We ended up in an emergency room with a doctor who could fix his arm.  However, we were warned that the fix was going to be VERY painful, and it was.  Even with heavy sedation (Propofol), my son was in enough pain that he squirmed and wiggled trying to get his arm away from the doctor.  He had to endure this pain in order to have a functional arm. But the setting wasn’t the end.  He had to have several casts, had to have it re-broken once, since the first setting didn’t take and finally was given a clean bill of health. Agile implementation is much like this story.  Agile was developed as a result of people recognizing that the development methodologies that were currently in place simply were ineffective.  However, the fix to the broken development that’s been festering for many years is not painless.  Many people start Agile thinking that things will be wonderful.  They won’t!  Agile is about visibility, and often, it brings great pain to surface.  It causes all of the missed deadlines, the cowboy coders, the coasters, the micro-managers, the lazy, and all of the other problems that are really part of your development process now to become painfully visible to EVERYONE.  Many people don’t like this exposure.  Agile will make the pain better, but not if you remove the cast (the rules above) prematurely and start breaking the rules that expose the most pain.  The healing will take time and is not instant (like Advil).  Figuring out what the true source of pain and fixing it is very valuable to you, your team, and your company.  Remember as you’re doing this that Agile isn’t the source of the pain, it’s really just exposing it.  Find the source. My recommendation is that ALL of these rules are followed for a minimum of six months, and preferably for an entire year, before you decide to break any of these rules.  Get a few good releases under your belt.  Figure out what your velocity is and start firing as a team.  Chances are, after you see agile really in action, you won’t want to break the rules because you’ll see their value. More Reading Jean Tabaka recently published a list of 78 Things I Have Learned in 6 Years of Agile Coaching.  Highly recommended. Technorati Tags: Agile,Scrum,Rules

    Read the article

< Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >