Search Results

Search found 32994 results on 1320 pages for 'second level cache'.


  • Analysing and measuring the performance of a .NET application (survey results)

    - by Laila
    Back in December last year, I asked myself: could it be that .NET developers think you need three days and a PhD to do performance profiling on their code? What if developers are shunning profilers because they perceive them as too complex to use? If so, what method do they use to measure and analyse the performance of their .NET applications? Do they even care about performance?

    So, a few weeks ago, I decided to get a 1-minute survey up and running in the hopes that some good, hard data would clear the matter up once and for all. I posted the survey on Simple Talk and got help from a few people to promote it. The survey consisted of 3 simple questions. Amazingly, 533 developers took the time to respond - which means I had enough data to get representative results! So before I go any further, I would like to thank all of you who contributed, because I now have some pretty good answers to the troubling questions I was asking myself. To thank you properly, I thought I would share some of the results with you.

    First of all, application performance is indeed important to most of you. In fact, performance is an intrinsic part of the development cycle for a good 40% of you, which is much higher than I had anticipated, I have to admit. (I know, "Have a little faith, Laila!")

    When asked what tool you use to measure and analyse application performance, I found that nearly half of the respondents use logging statements, a third use performance counters, and 70% of respondents use a profiler of some sort (a third-party performance profiler, the CLR profiler or the Visual Studio profiler). The importance attributed to logging statements did surprise me a little. I am still not sure why somebody would go to the trouble of manually instrumenting code in order to measure its performance, instead of just using a profiler. I personally find the process of annotating code, calculating times from log files, and relating it all back to your source terrifyingly laborious. Not to mention that you then need to remember to turn it all off later! Even when you have logging in place throughout all your code anyway, you still have a fair amount of potentially error-prone calculation to do to sift through the results; in addition, you'll only get method-level rather than line-level timings, and you won't get timings from any framework or library methods you don't have source for. To top it all, we all know that bottlenecks are rarely where you would expect them to be, so you could be wasting time looking for a performance problem in the wrong place. Profilers, on the other hand, do all the work for you: they automatically collect the CPU and wall-clock timings, and present the results from method-level timings all the way down to individual lines of code. Maybe I'm missing a trick. I would love to know about the types of scenarios where you actively prefer to use logging statements.

    Finally, while a third of the respondents didn't have a strong opinion about code performance profilers, those who had an opinion thought that they were mainly complex to use and time-consuming. Three respondents in particular summarised this perfectly: "sometimes, they are rather complex to use, adding an additional time-sink to the process of trying to resolve the existing problem"; "they are simple to use, but the results are hard to understand"; "complex to find the more advanced things, easy to find some low hanging fruit".
    These results confirmed my suspicions: profilers are seen as designed for more advanced users who can use them effectively and make sense of the results.

    I found yet more interesting information when I started comparing the sample of developers for whom performance is an important part of the dev cycle with those who only look at performance in times of crisis, and those to whom performance is not important as long as the app works. See the three graphs below.

    Sample of developers to whom performance is an important part of the dev cycle:
    Sample of developers to whom performance is important only in times of crisis:
    Sample of developers to whom performance is not important, as long as the app works:

    As you can see, there is a strong correlation between the usage of a profiler and the importance attributed to performance: indeed, the more important performance is to a development team, the more likely they are to use a profiler. In addition, developers to whom performance is an important part of the dev cycle tend to use a much wider range of methods for performance measurement and analysis. And, unsurprisingly, the less important performance is, the less varied the methods of measurement are.

    So all in all, to come back to my random questions: .NET developers do care about performance. Those who care the most use a wider range of performance measurement methods than those who care less. But overall, logging statements, performance counters and third-party performance profilers are the performance measurement methods of choice for most developers. Finally, although most of you find code profilers complex to use, those of you who care the most about performance tend to use profilers more than those of you to whom performance is not so important.

    Read the article

  • How do I prove or disprove "god" objects are wrong?

    - by honestduane
    Problem Summary: Long story short, I inherited a code base and a development team I am not allowed to replace, and the use of God Objects is a big issue. Going forward, I want us to refactor things, but I am getting push-back from the teams, who want to do everything with God Objects "because it's easier", which means I would not be allowed to refactor. I pushed back, citing my years of dev experience and the fact that I'm the new boss who was hired to know these things, etc., and so did the third-party offshore company's account sales rep. This is now at the executive level; my meeting is tomorrow and I want to go in with a lot of technical ammo to advocate best practices, because I feel it will be cheaper in the long run for the company (and I personally feel that is what the third party is worried about).

    My issue is that, from a technical level, I know it's good long term, but I'm having trouble with the ultra-short term and the 6-month term, and while it's something I "know", I can't prove it with references and cited resources outside of one person (Robert C. Martin, aka Uncle Bob). That is what I am being asked to do, as I have been told that having data from one person, and only one person, is not a good enough argument.

    Question: What are some resources I can cite directly (title, year published, page number, quote) by well-known experts in the field that explicitly say this use of "God" Objects/Classes/Systems is bad (or good, since we are looking for the most technically valid solution)?

    Research I have already done: I have a number of books here and I have searched their indexes for the use of the words "god object" and "god class". I found that, oddly, it's almost never used; the copy of the GoF book I have, for example, never uses it (at least according to the index in front of me), but I have found it in the two books below, and I want more I can use. I checked the Wikipedia page for "God Object" and it's currently a stub with few reference links, so although I personally agree with what it says, it doesn't have much I can use in an environment where personal experience is not considered valid. The book it cites is also considered too old to be valid by the people I am debating these technical points with, as the argument they are making is that "it was once thought to be bad but nobody could prove it, and now modern software says 'god' objects are good to use". I personally believe that this statement is incorrect, but I want to prove the truth, whatever it is.

    In Robert C. Martin's "Agile Principles, Patterns, and Practices in C#" (ISBN: 0-13-185725-8, hardcover), page 266 states: "Everybody knows that god classes are a bad idea. We don't want to concentrate all the intelligence of a system into a single object or a single function. One of the goals of OOD is the partitioning and distribution of behavior into many classes and many functions." -- and then goes on to say that it is sometimes better to use God Classes anyway (citing micro-controllers as an example).

    In Robert C. Martin's "Clean Code: A Handbook of Agile Software Craftsmanship", page 136 (and only this page) talks about the "God class" and calls it out as a prime example of a violation of the "classes should be small" rule he uses to promote the Single Responsibility Principle, starting on page 138.

    The problem I have is that all my references and citations come from the same person (Robert C. Martin) - the same single source.
    I am being told that because he is just one guy, my desire not to use "God Classes" is invalid and not accepted as a standard best practice in the software industry. Is this true? Am I doing things wrong from a technical perspective by trying to keep to the teachings of Uncle Bob?

    God Objects and Object-Oriented Programming and Design: The more I think about this, the more I think it is something you learn implicitly when you study OOP and that is never explicitly called out; my thinking is that it's implicit in good design (feel free to correct me, please, as I want to learn). The problem is that I "know" this, but not everybody does, so in this case it's not considered a valid argument, because I am effectively calling it out as universal truth when in fact most people are statistically ignorant of it, since statistically most people are not programmers.

    Conclusion: I am at a loss on what to search for to get the best additional results to cite, since they are making a technical claim and I want to know the truth and be able to prove it with citations like a real engineer/scientist, even if I am biased against god objects due to my personal experience with code that used them. Any assistance or citations would be deeply appreciated.
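    (For what it's worth, the shape of the argument is easy to sketch in code. Below is a minimal, hypothetical illustration - in JavaScript for brevity, with invented names, not taken from any of the books cited above - of why a god object accumulates reasons to change, and what the same responsibilities look like once partitioned along the Single Responsibility Principle.)

        // A "god object": one class that knows about everything.
        function OrderGod() {}
        OrderGod.prototype.validate = function (order) { /* business rules */ };
        OrderGod.prototype.save = function (order) { /* persistence */ };
        OrderGod.prototype.emailCustomer = function (order) { /* notification */ };
        OrderGod.prototype.renderInvoice = function (order) { /* presentation */ };
        // A schema change, a new mail server, or a new invoice layout all force
        // edits (and re-testing) of this single class.

        // The same responsibilities partitioned, one reason to change per class:
        function OrderValidator() {}
        OrderValidator.prototype.validate = function (order) { /* business rules only */ };

        function OrderRepository() {}
        OrderRepository.prototype.save = function (order) { /* persistence only */ };

        function CustomerNotifier() {}
        CustomerNotifier.prototype.email = function (order) { /* notification only */ };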

    Read the article

  • .NET Security Part 2

    - by Simon Cooper
    So, how do you create partial-trust appdomains? Where do you come across them? There are two main situations in which your assembly runs as partially-trusted using the Microsoft .NET stack:

    1. Creating a CLR assembly in SQL Server with anything other than the UNSAFE permission set. The permissions available in each permission set are given here.
    2. Loading an assembly in ASP.NET in any trust level other than Full. Information on ASP.NET trust levels can be found here. You can configure the specific permissions available to assemblies using ASP.NET policy files.

    Alternatively, you can create your own partially-trusted appdomain in code and directly control the permissions and the full-trust API available to the assemblies you load into the appdomain. This is the scenario I'll be concentrating on in this post.

    Creating a partially-trusted appdomain

    There is a single overload of AppDomain.CreateDomain that allows you to specify the permissions granted to assemblies in that appdomain – this one. This is the only call that allows you to specify a PermissionSet for the domain. All the other calls simply use the permissions of the calling code. If the permissions are restricted, then the resulting appdomain is referred to as a sandboxed domain. There are three things you need to create a sandboxed domain:

    1. The specific permissions granted to all assemblies in the domain.
    2. The application base (aka working directory) of the domain.
    3. The list of assemblies that have full-trust if they are loaded into the sandboxed domain.

    The third item is what allows us to have a fully-trusted API that is callable by partially-trusted code. I'll be looking at the details of this in a later post.

    Granting permissions to the appdomain

    Firstly, the permissions granted to the appdomain. This is encapsulated in a PermissionSet object, initialized either with no permissions or full-trust permissions. For sandboxed appdomains, the PermissionSet is initialized with no permissions, then you add the permissions you want assemblies loaded into that appdomain to have by default:

        PermissionSet restrictedPerms = new PermissionSet(PermissionState.None);

        // all assemblies need Execution permission to run at all
        restrictedPerms.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));

        // grant general read access to C:\config.xml
        restrictedPerms.AddPermission(
            new FileIOPermission(FileIOPermissionAccess.Read, @"C:\config.xml"));

        // grant permission to perform DNS lookups
        restrictedPerms.AddPermission(
            new DnsPermission(PermissionState.Unrestricted));

    It’s important to point out that the permissions granted to an appdomain, and so to all assemblies loaded into that appdomain, are usable without needing to go through any SafeCritical code (see my last post if you’re unsure what SafeCritical code is). That is, partially-trusted code loaded into an appdomain with the above permissions (and so running under the Transparent security level) is able to create and manipulate a FileStream object to read from C:\config.xml directly. It is only for operations requiring permissions that are not granted to the appdomain that partially-trusted code is required to call a SafeCritical method that then asserts the missing permissions and performs the operation safely on behalf of the partially-trusted code.
    The application base of the domain

    This is simply set as a property on an AppDomainSetup object, and is used as the default directory assemblies are loaded from:

        AppDomainSetup appDomainSetup = new AppDomainSetup {
            ApplicationBase = @"C:\temp\sandbox",
        };

    If you’ve read the documentation around sandboxed appdomains, you’ll notice that it mentions a security hole if this parameter is set incorrectly. I’ll be looking at this, and other pitfalls that will break the sandbox, in a later post.

    Full-trust assemblies in the appdomain

    Finally, we need the strong names of the assemblies that, when loaded into the appdomain, will be run as full-trust, regardless of the permissions specified on the appdomain. These assemblies will contain methods and classes decorated with SafeCritical and Critical attributes. I’ll be covering the details of creating full-trust APIs for partial-trust appdomains in a later post. This is how you get the strong name of an assembly to be executed as full-trust in the sandbox:

        // get the Assembly object for the assembly
        Assembly assemblyWithApi = ...

        // get the StrongName from the assembly's collection of evidence
        StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>();

    Creating the sandboxed appdomain

    So, putting these three together, you create the appdomain like so:

        AppDomain sandbox = AppDomain.CreateDomain(
            "Sandbox",
            null,
            appDomainSetup,
            restrictedPerms,
            apiStrongName);

    You can then load and execute assemblies in this appdomain like any other. For example, to load an assembly into the appdomain and get an instance of the Sandboxed.Entrypoint class, implementing IEntrypoint, you do this:

        IEntrypoint o = (IEntrypoint)sandbox.CreateInstanceFromAndUnwrap(
            @"C:\temp\sandbox\SandboxedAssembly.dll",
            "Sandboxed.Entrypoint");

        // call the Execute method on this object within the sandbox
        o.Execute();

    The second parameter to CreateDomain is for security evidence used in the appdomain. This was a feature of the .NET 2 security model, and has been (mostly) obsoleted in the .NET 4 model. Unless the evidence is needed elsewhere (e.g. isolated storage), you can pass in null for this parameter.

    Conclusion

    That’s the basics of sandboxed appdomains. The most important object is the PermissionSet that defines the permissions available to assemblies running in the appdomain; it is this object that defines the appdomain as full or partial-trust. The appdomain also needs a default directory used for assembly lookups as the ApplicationBase parameter, and you can specify an optional list of the strong names of assemblies that will be given full-trust permissions if they are loaded into the sandboxed appdomain. Next time, I’ll be looking closer at full-trust assemblies running in a sandboxed appdomain, and what you need to do to make an API available to partial-trust code.

    Read the article

  • Cannot set a credential for principal 'sa'

    - by hailey
    I was trying to change the SA password on my development server this morning and got an error: Msg 15535, Level 16, State 1, Line 1 - Cannot set a credential for principal 'sa'. It was a little frustrating to get an error for a seemingly simple task, but then again maybe I screwed something up. After doing a couple of searches I found a Microsoft KB article (support.microsoft.com/kb/956177), "You receive an exception in SQL Server 2008 when you try to modify the properties of the SQL Server Administrator account by using SQL Server Management Studio". It was for SQL 2008, but it worked for my SQL 2005 SP3 server just fine. You have to click the Map to Credential check box, but you don't have to add any credentials - just click the OK button to complete, and that's it.

    Read the article

  • design of 'game engine' for small javascript games?

    - by Matt Ball
    I'm making a group of two or three simple JavaScript games for fun. After someone finishes one game, they'll be presented with a harder or easier version of another game, depending on whether the original game was won or lost. I have a high-level question about the design of things: So far I've created a class for one game type that manages the interaction with the UI and the state of the game itself. But for tracking how many of the subgames have been won, or for understanding whether the next game presented should be more or less difficult, are there arguments to be made for making a 'game engine' class? How does the engine communicate with the games? For instance, when a game is won, how is that information relayed to the engine? Is there a better or more common design? (If you want to see what I have so far, the games are slowly taking shape here: https://github.com/yosemitebandit/candela and can be viewed at http://yosemitebandit.com/candela)
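    One common shape for this (a minimal sketch with invented names, not a prescription): the engine owns the list of games, hands each game a difficulty and a callback, and the game reports back through that callback when it ends. The games never need a reference to the engine itself:

        // Sketch: the engine passes each game a callback to report its result.
        function GameEngine(gameFactories) {
            this.gameFactories = gameFactories; // functions: (difficulty, onGameOver) -> starts a game
            this.wins = 0;
            this.losses = 0;
            this.gamesPlayed = 0;
        }

        GameEngine.prototype.start = function () {
            this.launchNext(1); // begin at baseline difficulty
        };

        GameEngine.prototype.launchNext = function (difficulty) {
            var engine = this;
            var factory = this.gameFactories[this.gamesPlayed % this.gameFactories.length];
            this.gamesPlayed++;
            factory(difficulty, function onGameOver(won) {
                if (won) { engine.wins++; } else { engine.losses++; }
                // harder after a win, easier (but never below 1) after a loss
                engine.launchNext(won ? difficulty + 1 : Math.max(1, difficulty - 1));
            });
        };

        // usage sketch:
        // var engine = new GameEngine([startPongGame, startSnakeGame]);
        // engine.start();

    Keeping the dependency one-way like this (the engine knows the games; each game only knows "call this when done") makes it easy to add a third game or change the difficulty policy without touching the games themselves.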

    Read the article

  • Recommendation for a platform to teach game development to kids [closed]

    - by Moshe Kravchik
    My 11-year-old son has decided he wants to create a Web game. He does not know much about programming and I never did any game development, so I'm not the best teacher for the topic. On the other hand, I really want him to get into building things, and making his own game sounds perfect. I'd like to find a way to keep him interested and progressing by himself. This means that I'm looking for a platform that is: 1. Simple to understand and use, with an intuitive interface 2. Powerful 3. Good tools, preferably free 4. A significant community for questions and tips 5. Localization - my son's English is quite poor (native Hebrew). We looked at Alice, but it was too limited in its abilities and isn't really a Web game building platform. HTML/CSS and JavaScript - too low-level for a kid to keep his interest. What would you recommend?

    Read the article

  • New Whitepaper: Oracle E-Business Suite on Exadata

    - by Steven Chan
    Our Maximum Availability Architecture (MAA) team has quietly been amassing a formidable set of whitepapers about the Oracle Exadata Database Machine. They're available here: MAA Best Practices - Exadata Database Machine. If you're one of the lucky ones with access to this hardware platform, you'll be pleased to hear that the MAA team has just published a new whitepaper with best practices for EBS environments: Oracle E-Business Suite on Exadata. This whitepaper covers the following topics:

    - Getting to Exadata -- a high-level overview of fresh installation on, and migration to, Exadata Database Machine, with pointers to more detailed documentation
    - High Availability and Disaster Recovery -- an overview of our MAA best practices, with pointers to our detailed MAA Best Practices documentation
    - Performance and Scalability -- best practices for running Oracle E-Business Suite on Exadata Database Machine, based on our internal testing

    Read the article

  • How to tell if SPARC T4 crypto is being used?

    - by danx
    A question that often comes up when running applications on SPARC T4 systems is "How can I tell if hardware crypto acceleration is being used?" To review, the SPARC T4 processor includes a crypto unit that supports several crypto instructions. For hardware crypto these include 11 AES instructions, 4 xmul* instructions (for AES GCM carryless multiply), mont for Montgomery multiply (which optimizes RSA and DSA), and 5 des_* instructions (for DES3). For hardware hash algorithm optimization, the T4 has the md5, sha1, sha256, and sha512 instructions (the last two are also used for SHA-224 and SHA-384).

    First off, it's easy to tell if the processor supports the T4 crypto instructions - use the isainfo -v command and look for "sparcv9" and "aes" (and other hash and crypto algorithms) in the output:

        $ isainfo -v
        64-bit sparcv9 applications
        crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia
        kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc

    These instructions are non-privileged, so they are available for direct use in user-level applications and libraries (such as OpenSSL). Here is the "openssl speed -evp" command shown with the built-in t4 engine and with the pkcs11 engine. Both run the T4 AES instructions, but the t4 engine is faster than the pkcs11 engine because it has less overhead (especially for smaller packet sizes):

        t-4 $ /usr/bin/openssl version
        OpenSSL 1.0.0j 10 May 2012

        t-4 $ /usr/bin/openssl engine
        (t4) SPARC T4 engine support
        (dynamic) Dynamic engine loading support
        (pkcs11) PKCS #11 engine support

        t-4 $ /usr/bin/openssl speed -evp aes-128-cbc   # t4 engine used by default
        . . .
        The 'numbers' are in 1000s of bytes per second processed.
        type          16 bytes     64 bytes    256 bytes    1024 bytes   8192 bytes
        aes-128-cbc  487777.10k   816822.21k   986012.59k  1017029.97k  1053543.08k

        t-4 $ /usr/bin/openssl speed -engine pkcs11 -evp aes-128-cbc
        engine "pkcs11" set.
        . . .
        The 'numbers' are in 1000s of bytes per second processed.
        type          16 bytes     64 bytes    256 bytes    1024 bytes   8192 bytes
        aes-128-cbc   31703.58k   116636.39k   350672.81k   696170.50k   993599.49k

    Note: The "-evp" flag indicates use of the OpenSSL "EnVeloPe" API, which gives more accurate results. That's because it tells OpenSSL to use the same API that external programs use when calling OpenSSL libcrypto functions, evp(3openssl).

    DTrace Shows if T4 Crypto Functions Are Used

    OK, good enough - the isainfo(1) command shows the instructions are present, but how does one know if they are being used? Chi-Chang Lin, who works on Oracle Solaris performance, wrote a DTrace script to show if T4 instructions are being executed. Run the script below and look for functions named "t4" and "yf" in the output: the OpenSSL t4 engine uses functions named "t4" and the PKCS#11 engine uses functions named "yf". To demonstrate, I'll first run "openssl speed" with the built-in t4 engine, then with the pkcs11 engine. The performance numbers are not valid due to the dtrace probes slowing things down.

        t-4 # dtrace -Z -n '
        pid$target::*yf*:entry,pid$target::*t4_*:entry{ @[probemod, probefunc] = count();}' \
        -c "/usr/bin/openssl speed -evp aes-128-cbc"
        dtrace: description 'pid$target::*yf*:entry' matched 101 probes
        . . .
        dtrace: pid 2029 has exited
        libcrypto.so.1.0.0   ENGINE_load_t4                         1
        libcrypto.so.1.0.0   t4_DH                                  1
        libcrypto.so.1.0.0   t4_DSA                                 1
        libcrypto.so.1.0.0   t4_RSA                                 1
        libcrypto.so.1.0.0   t4_destroy                             1
        libcrypto.so.1.0.0   t4_free_aes_ctr_NIDs                   1
        libcrypto.so.1.0.0   t4_init                                1
        libcrypto.so.1.0.0   t4_add_NID                             3
        libcrypto.so.1.0.0   t4_aes_expand128                       5
        libcrypto.so.1.0.0   t4_cipher_init_aes                     5
        libcrypto.so.1.0.0   t4_get_all_ciphers                     6
        libcrypto.so.1.0.0   t4_get_all_digests                    59
        libcrypto.so.1.0.0   t4_digest_final_sha1                  65
        libcrypto.so.1.0.0   t4_digest_init_sha1                   65
        libcrypto.so.1.0.0   t4_sha1_multiblock                   126
        libcrypto.so.1.0.0   t4_digest_update_sha1                261
        libcrypto.so.1.0.0   t4_aes128_cbc_encrypt            1432979
        libcrypto.so.1.0.0   t4_aes128_load_keys_for_encrypt  1432979
        libcrypto.so.1.0.0   t4_cipher_do_aes_128_cbc         1432979

        t-4 # dtrace -Z -n '
        pid$target::*yf*:entry,pid$target::*t4_*:entry{ @[probemod, probefunc] = count();}' \
        -c "/usr/bin/openssl speed -engine pkcs11 -evp aes-128-cbc"
        dtrace: description 'pid$target::*yf*:entry' matched 101 probes
        engine "pkcs11" set.
        . . .
        dtrace: pid 2033 has exited
        libcrypto.so.1.0.0   ENGINE_load_t4                         1
        libcrypto.so.1.0.0   t4_DH                                  1
        libcrypto.so.1.0.0   t4_DSA                                 1
        libcrypto.so.1.0.0   t4_RSA                                 1
        libcrypto.so.1.0.0   t4_destroy                             1
        libcrypto.so.1.0.0   t4_free_aes_ctr_NIDs                   1
        libcrypto.so.1.0.0   t4_get_all_ciphers                     1
        libcrypto.so.1.0.0   t4_get_all_digests                     1
        libsoftcrypto.so.1   rijndael_key_setup_enc_yf              1
        libsoftcrypto.so.1   yf_aes_expand128                       1
        libcrypto.so.1.0.0   t4_add_NID                             3
        libsoftcrypto.so.1   yf_aes128_cbc_encrypt            1542330
        libsoftcrypto.so.1   yf_aes128_load_keys_for_encrypt  1542330

    So, as shown above, the OpenSSL built-in t4 engine executes t4_* functions (which are hand-coded assembly executing the T4 AES instructions) and the OpenSSL pkcs11 engine executes *yf* functions.

    Programmatic Use of the OpenSSL T4 engine

    The OpenSSL t4 engine is used automatically with the /usr/bin/openssl command line. Chi-Chang Lin also points out that if you're calling the OpenSSL API (libcrypto.so) from a program, you must call ENGINE_load_built_engines(), otherwise the built-in t4 engine will not be loaded. You do not call ENGINE_set_default(). That's because the "openssl speed -evp" test calls ENGINE_load_built_engines() even though the "-engine" option wasn't specified.

    OpenSSL T4 engine Availability

    The OpenSSL t4 engine is available with Solaris 11 and 11.1. For Solaris 10 08/11 (U10), you need to use the OpenSSL pkcs11 engine. The OpenSSL t4 engine is distributed only with the version of OpenSSL distributed with Solaris (and not third-party or self-compiled versions of OpenSSL). The OpenSSL engine implements the AES cipher for Solaris 11, released 11/2011. For Solaris 11.1, released 11/2012, the OpenSSL engine adds optimization for the MD5, SHA-1, and SHA-2 hash algorithms, and DES-3. Although the T4 processor has Camellia and Kasumi block cipher instructions, these are not implemented in the OpenSSL T4 engine. The following charts may help view availability of optimizations. The first chart shows what's available with Solaris CLIs and APIs; the second chart shows what's available in Solaris OpenSSL.

    Native Solaris Optimization for SPARC T4

    This table shows Solaris native CLI and API support. As such, these are all available with the OpenSSL pkcs11 engine.
CLIs: "openssl -engine pkcs11", encrypt(1), decrypt(1), mac(1), digest(1), MD5sum(1), SHA1sum(1), SHA224sum(1), SHA256sum(1), SHA384sum(1), SHA512sum(1) APIs: PKCS#11 library libpkcs11(3LIB) (incluDES Openssl pkcs11 engine), libMD(3LIB), and Solaris kernel modules AlgorithmSolaris 1008/11 (U10)Solaris 11Solaris 11.1 AES-ECB, AES-CBC, AES-CTR, AES-CBC AES-CFB128 XXX DES3-ECB, DES3-CBC, DES2-ECB, DES2-CBC, DES-ECB, DES-CBC XXX bignum Montgomery multiply (RSA, DSA) XXX MD5, SHA-1, SHA-256, SHA-384, SHA-512 XXX SHA-224 X ARCFOUR (RC4) X Solaris OpenSSL T4 Engine Optimization This table is for the Solaris OpenSSL built-in t4 engine. Algorithms listed above are also available through the OpenSSL pkcs11 engine. CLI: openssl(1openssl) APIs: openssl(5), engine(3openssl), evp(3openssl), libcrypto crypto(3openssl) AlgorithmSolaris 11Solaris 11SRU2Solaris 11.1 AES-ECB, AES-CBC, AES-CTR, AES-CBC AES-CFB128 XXX DES3-ECB, DES3-CBC, DES-ECB, DES-CBC X bignum Montgomery multiply (RSA, DSA) X MD5, SHA-1, SHA-256, SHA-384, SHA-512 XX SHA-224 X Source Code Availability Solaris Most of the T4 assembly code that called the new T4 crypto instructions was written by Ferenc Rákóczi of the Solaris Security group, with assistance from others. You can download the Solaris source for this and other parts of Solaris as a few zip files at the Oracle Download website. The relevant source files are generally under directories usr/src/common/crypto/{aes,arcfour,des,md5,modes,sha1,sha2}}/sun4v/. and usr/src/common/bignum/sun4v/. Solaris 11 binary is available from the Oracle Solaris 11 download website. OpenSSL t4 engine The source for the OpenSSL t4 engine, which is based on the Solaris source above, is viewable through the OpenGrok source code browser in directory src/components/openssl/openssl-1.0.0/engines/t4 . You can download the source from the same website or through Mercurial source code management, hg(1). Conclusion Oracle Solaris with SPARC T4 provides a rich set of accelerated cryptographic and hash algorithms. Using the latest update, Solaris 11.1, provides the best set of optimized algorithms, but alternatives are often available, sometimes slightly slower, for releases back to Solaris 10 08/11 (U10). Reference See also these earlier blogs. SPARC T4 OpenSSL Engine by myself, Dan Anderson (2011), discusses the Openssl T4 engine and reviews the SPARC T4 processor for the Solaris 11 release. Exciting Crypto Advances with the T4 processor and Oracle Solaris 11 by Valerie Fenwick (2011) discusses crypto algorithms that were optimized for the T4 processor with the Solaris 11 FCS (11/11) and Solaris 10 08/11 (U10) release. T4 Crypto Cheat Sheet by Stefan Hinker (2012) discusses how to make T4 crypto optimization available to various consumers (such as SSH, Java, OpenSSL, Apache, etc.) High Performance Security For Oracle Database and Fusion Middleware Applications using SPARC T4 (PDF, 2012) discusses SPARC T4 and its usage to optimize application security. Configuring Oracle iPlanet WebServer / Oracle Traffic Director to use crypto accelerators on T4-1 servers by Meena Vyas (2012)

    Read the article

  • What does "enterprise" means in relation to software architecture?

    - by SkonJeet
    I see the term "enterprise" being thrown around software developers and programmers a lot and used loosely it seems. en·ter·prise/'ent?r?priz/ Noun: A project or undertaking, typically one that is difficult or requires effort. Initiative and resourcefulness. Can someone please clarify what this term actually encompasses? "At an enterprise level", "enterprise scale"? There are even "enterprise editions" of things. What exactly does it mean? It obviously doesn't make sense judging by the above definition so more specifically to software what does one mean when using the word enterprise? EDIT: To add a spin on this - how does this term then fit into phrases such as Enterprise Framework Model? What does data access and data context have to do with company-wide descriptions?

    Read the article

  • SQL Server 2012 edition comparison details are published

    - by DavidWimbush
    Interesting stuff, particularly if you're doing BI. BISM Tabular and Power View will not be in Standard Edition, only in the new - presumably more expensive - Business Intelligence Edition. That kind of makes sense, as you need a fairly pricey edition of SharePoint to really get all the benefits, but it's a shame there won't be some kind of limited version in Standard Edition. And AlwaysOn will be in Standard Edition, but limited to 2 nodes. I really expected AlwaysOn to be Enterprise-only, so this is a great decision. It allows those of us working at a more modest scale to benefit, and raises the fault tolerance of SQL Server as a product to a new level.

    Read all about it here: http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-editions.aspx

    Read the article

  • Pac-Man Hiding Spot Makes High Scores a Snap

    - by Jason Fitzpatrick
    This interesting bug (feature?) in the original Pac-Man game makes it easy to hide from the ghosts, ensuring a long-lived and well-fed Pac-Man. Check out the video above to see the black hole you can park Pac-Man in to avoid assault by the ghosts. There are two big caveats with this trick: first, it only works in the original game (spin-offs and modern adaptations won't necessarily have it, but the original machine and MAME implementations of it will). Second, it doesn't work if the ghosts see you park yourself there; you need to slip into the spot out of their direct line of sight. Still craving more Pac-Man goodness? Check out these cheat maps that map out all the patterns you need to follow to sneak through every level unmolested by ghosts. [via Neatorama]

    Read the article

  • Experience formula with javascript

    - by StealingMana
    I'm having trouble working out a formula, using this experience curve, to get the total exp after each level. I bet it's easy and I'm just overthinking it.

        var maxlvl = 10;
        var increment = 28;
        var baseexp = 100;

        function calc() {
            // print the exp required for each individual level
            for (var i = 0; i < maxlvl * increment; i += increment) {
                var expperlvl = baseexp + i;
                document.writeln(expperlvl);
            }
        }

    I figured it out:

        var maxlvl = 6;
        var base = 200;
        var increment = 56;

        function total() {
            // closed form: (maxlvl-1) levels at base exp, plus the
            // arithmetic-series sum of the growing increments
            var totalxp = (base * (maxlvl - 1)) + (increment * (maxlvl - 2) * (maxlvl - 1) / 2);
            document.write(totalxp);
        }
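    As a quick sanity check (my own snippet, not from the original post): summing the per-level requirements with a loop gives the same answer as the closed form - 1560 total exp for maxlvl=6, base=200, increment=56.

        // cross-check the closed form against a straightforward loop
        function totalByLoop(maxlvl, base, increment) {
            var total = 0;
            // levels 1 .. maxlvl-1, each costing base plus an ever-growing increment
            for (var lvl = 1; lvl < maxlvl; lvl++) {
                total += base + (lvl - 1) * increment;
            }
            return total;
        }

        function totalClosedForm(maxlvl, base, increment) {
            return base * (maxlvl - 1) + increment * (maxlvl - 2) * (maxlvl - 1) / 2;
        }

        console.log(totalByLoop(6, 200, 56));     // 1560
        console.log(totalClosedForm(6, 200, 56)); // 1560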

    Read the article

  • GDD-BR 2010 [2E] Building Business Apps using Google Web Toolkit and Spring Roo

    Speaker: Chris Ramsdale
    Track: Cloud Computing
    Time: 14:40 - 15:25
    Room: sala[2]
    Level: 201

    Who says you can't build rich web apps for your business? Follow along in this session to learn how you can use the latest integrated set of tools from Google and VMware to take your internal business apps into the cloud. We'll cover how to get started using GWT with Spring Roo and SpringSource Tool Suite (STS), as well as the new data presentation widgets and MVP framework that will be available in the 2.1 release of GWT.

    Read the article

  • Linux multiplayer server, need some help.

    - by Viktor
    I need to write a server for a game that is closer to an action-type game, so it needs fast communication. There must be only one server; I'll just split the world into zones, but this is not the question. The client will be written in Java using jMonkeyEngine. In my opinion I should write the server in Java too. I don't want to implement any low-level features such as communication, reliable UDP, etc. Can you suggest any Java libraries that already implement this? Or is there maybe a more suitable language for implementing this project (it must run on Linux)?

    Read the article

  • MySQL Connect Only 10 Days Away - Focus on InnoDB Sessions

    - by Bertrand Matthelié
    Time flies and MySQL Connect is only 10 days away! You can check out the full program here, as well as in the September edition of the MySQL newsletter. Mat recently blogged about the MySQL Cluster sessions you’ll have the opportunity to attend, and below are those focused on InnoDB. Remember you can plan your schedule with Schedule Builder.

    Saturday, 1.00 pm, Room Golden Gate 3: 10 Things You Should Know About InnoDB - Calvin Sun, Oracle
    InnoDB is the default storage engine for Oracle’s MySQL as of MySQL Release 5.5. It provides the standard ACID-compliant transactions, row-level locking, multiversion concurrency control, and referential integrity. InnoDB also implements several innovative technologies to improve its performance and reliability. This presentation gives a brief history of InnoDB; its main features; and some recent enhancements for better performance, scalability, and availability.

    Saturday, 5.30 pm, Room Golden Gate 4: Demystified MySQL/InnoDB Performance Tuning - Dimitri Kravtchuk, Oracle
    This session covers performance tuning with MySQL and the InnoDB storage engine for MySQL and explains the main improvements made in MySQL Release 5.5 and Release 5.6. Which setting for which workload? Which value will be better for my system? How can I avoid potential bottlenecks from the beginning? Do I need a purge thread? Is it true that InnoDB doesn’t need thread concurrency anymore? These and many other questions are asked by DBAs and developers. Things are changing quickly and constantly, and there is no “silver bullet.” But understanding the configuration settings’ impact is already a huge step in performance improvement. Bring your ideas and problems to share them with others - the discussion is open, just moderated by a speaker.

    Sunday, 10.15 am, Room Golden Gate 4: Better Availability with InnoDB Online Operations - Calvin Sun, Oracle
    Many top Web properties rely on Oracle’s MySQL as a critical piece of infrastructure for serving millions of users. Database availability has become increasingly important. One way to enhance availability is to give users full access to the database during data definition language (DDL) operations. The online DDL operations in recent MySQL releases offer users the flexibility to perform schema changes while having full access to the database - that is, with minimal delay of operations on a table and without rebuilding the entire table. These enhancements provide better responsiveness and availability in busy production environments. This session covers these improvements in the InnoDB storage engine for MySQL for online DDL operations such as add index, drop foreign key, and rename column.

    Sunday, 11.45 am, Room Golden Gate 7: Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster - Andrew Morgan and John Duncan, Oracle
    Ever-increasing performance demands of Web-based services have generated significant interest in providing NoSQL access methods to MySQL (MySQL Cluster and the InnoDB storage engine of MySQL), enabling users to maintain all the advantages of their existing relational databases while providing blazing-fast performance for simple queries. Get the best of both worlds: persistence; consistency; rich SQL queries; high availability; scalability; and simple, flexible APIs and schemas for agile development. This session describes the memcached connectors and examines some use cases for how MySQL and memcached fit together in application architectures. It does the same for the newest MySQL Cluster native connector, an easy-to-use, fully asynchronous connector for Node.js.

    Sunday, 1.15 pm, Room Golden Gate 4: InnoDB Performance Tuning - Inaam Rana, Oracle
    The InnoDB storage engine has always been highly efficient and includes many unique architectural elements to ensure high performance and scalability. In MySQL 5.5 and MySQL 5.6, InnoDB includes many new features that take better advantage of recent advances in operating systems and hardware platforms than previous releases did. This session describes unique InnoDB architectural elements for performance, new features, and how to tune InnoDB to achieve better performance.

    Sunday, 4.15 pm, Room Golden Gate 3: InnoDB Compression for OLTP - Nizameddin Ordulu, Facebook and Inaam Rana, Oracle
    Data compression is an important capability of the InnoDB storage engine for Oracle’s MySQL. Compressed tables reduce the size of the database on disk, resulting in fewer reads and writes and better throughput by reducing the I/O workload. Facebook pushes the limit of InnoDB compression and has made several enhancements to InnoDB, making this technology ready for online transaction processing (OLTP). In this session, you will learn the fundamentals of InnoDB compression. You will also learn the enhancements the Facebook team has made to improve InnoDB compression, such as reducing compression failures, not logging compressed page images, and allowing changes of compression level.

    Not registered yet? You can still save US$300 over the on-site fee - Register Now!

    Read the article

  • Remove multiple trailing slashes in a single 301 in .htaccess?

    - by Jakobud
    There is a similar question here, but the solution does not work in Apache for our site. I'm trying to remove multiple trailing slashes from URLs on our site. I found some .htaccess code that seems to work:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} ^(.*)//(.*)$
        RewriteRule . %1/%2 [R=301,L]

    This rule removes multiple slashes from anywhere in the URL, so http://www.mysite.com/category/accessories//// becomes http://www.mysite.com/category/accessories/. However, it redirects once for every extra slash. So:

        http://www.mysite.com/category/accessories///////
        301 redirects to http://www.mysite.com/category/accessories//////
        301 redirects to http://www.mysite.com/category/accessories/////
        301 redirects to http://www.mysite.com/category/accessories////
        301 redirects to http://www.mysite.com/category/accessories///
        301 redirects to http://www.mysite.com/category/accessories//
        301 redirects to http://www.mysite.com/category/accessories/

    Is it possible to rewrite this rule so that it does it all in a single 301 redirect? Also, the above directive does not work at the root level of our site: http://www.mysite.com///// does not redirect, but it should.

    Read the article

  • GDD-BR 2010 [2F] Storage, Bigquery and Prediction APIs

    Speaker: Patrick Chanezon
    Track: Cloud Computing
    Time slot: F [15:30 - 16:15]
    Room: 2
    Level: 101

    Google is expanding our storage products by introducing Google Storage for Developers. It offers a RESTful API for storing and accessing data at Google. Developers can take advantage of the performance and reliability of Google's storage infrastructure, as well as the advanced security and sharing capabilities. We will demonstrate key functionality of the product as well as customer use cases. Google relies heavily on data analysis and has developed many tools to understand large datasets. Two of these tools are now available on a limited sign-up basis to developers: (1) BigQuery: interactive analysis of very large data sets and (2) Prediction API: make informed predictions from your data. We will demonstrate their use and give instructions on how to get access.

    Read the article

  • Getting a solid understanding of Linux fundamentals

    - by JoshEarl
    I'm delving into the Linux world again as a diversion from my Microsoft-centric day job, and every time I tackle a new project I find it a frustrating exercise in trial and error. One thing that I always try to do when learning something new is figure out what the big pieces are and how they work together. I haven't yet come across a resource that explains Linux at this level. Resources seem to be either aimed at the barely computer literate crowd (Linux doesn't bite. Promise!) or the just compile the kernel and make your own distro crowd. I'm looking for a "JavaScript: The Good Parts" type of road map that doesn't necessarily answer all my questions so much as help me understand what questions I need to be asking. Any suggestions?

    Read the article

  • How to code Umbraco XSLT to retrieve Nodes from unrelated tree

    - by Phil.Wheeler
    I have an Umbraco site for personal use that I also want to use as a blog. I'm trying to put together the XSLT to grab the top three posts from the nodes in the Blog tree (node id = 1063) and display these on a tab page that is incorporated into the front page. The following image illustrates the node hierarchy: With my extremely limited appreciation of XSLT, I'm unable to select the "Blog" node by its ID and take the 3 pages below it to display in the "Top Posts" part of my site, which is found under the "Frontpage Tabs" node. All the examples I find work with the "current page", which is typically the top-level node, "Personal Site". How should I accomplish this?

    Read the article

  • What can i use as a 3d Tile map editor?

    - by alfa64
    I need to make grid-based levels with 3D models for a dungeon crawler (a recent example being Legend of Grimrock), but I need to have several layers, and to place entities with properties such as position, angle, etc. I was considering Tiled, using layers as height for each level, but it's very hard to work with and visualize. What can I use for this purpose? The output format needs to be JSON, XML, or something I can use in my engine. Ideally I'd want something like Tiled with a 3D visualization/edit mode and support for loading models, or at least some visual representation of them.
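    Whatever tool is chosen, it helps to pin down the data you need out of it first. A minimal sketch (field names invented, not taken from any particular editor) of what a multi-layer grid level with placed entities might serialize to:

        // One possible JSON shape for a grid-based, multi-layer level.
        var level = {
            width: 16,                  // grid size in tiles
            height: 16,
            layers: [                   // one entry per vertical layer
                { z: 0, tiles: [ /* width*height tile ids, row-major */ ] },
                { z: 1, tiles: [ /* ... */ ] }
            ],
            entities: [                 // free-form placements on top of the grid
                { model: "torch.obj", x: 3, y: 7, z: 1, angle: 90,
                  properties: { lit: true } }
            ]
        };
        // Any editor whose export can be massaged into this shape (Tiled's JSON
        // export, a custom tool, etc.) would work with the same engine-side loader.
        console.log(JSON.stringify(level));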

    Read the article

  • Maintaining packages with code - Adding a property expression programmatically

    Every now and then I've come across scenarios where I need to update a lot of packages all in the same way. The usual scenario revolves around a group of packages all having been built off the same package template, and something needs to be updated to keep up with new requirements - a new logging standard, for example. You'd probably start by updating your template package, but then you need to address all your existing packages. Often this can run into the hundreds of packages, and clearly that's not a job anyone wants to do by hand. I normally solve the problem by writing a simple console application that looks for files and patches any package it finds, and it is an example of this that I thought I'd tidy up a bit and publish here.

    This sample will look at the package, find any top-level Execute SQL Tasks, and change the SQL Statement property to use an expression. It is very simplistic, working on top-level tasks only, so nothing inside a Sequence Container or Loop will be checked, but obviously the code could be extended for this if required. The code that actually sets the expression is shown below; the rest is just wrapper code to find the package and to find the task.

        /// <summary>
        /// The CreationName of the Tasks to target, e.g. Execute SQL Task
        /// </summary>
        private const string TargetTaskCreationName = "Microsoft.SqlServer.Dts.Tasks.ExecuteSQLTask.ExecuteSQLTask, Microsoft.SqlServer.SQLTask, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";

        /// <summary>
        /// The name of the task property to target.
        /// </summary>
        private const string TargetPropertyName = "SqlStatementSource";

        /// <summary>
        /// The property expression to set.
        /// </summary>
        private const string ExpressionToSet = "@[User::SQLQueryVariable]";

        ....

        // Check if the task matches our target task type
        if (taskHost.CreationName == TargetTaskCreationName)
        {
            // Check for the target property
            if (taskHost.Properties.Contains(TargetPropertyName))
            {
                // Get the property, check for an expression and set expression if not found
                DtsProperty property = taskHost.Properties[TargetPropertyName];
                if (string.IsNullOrEmpty(property.GetExpression(taskHost)))
                {
                    property.SetExpression(taskHost, ExpressionToSet);
                    changeCount++;
                }
            }
        }

    This is a console application, so to specify which packages you want to target you have three options:

    1. Find all packages in the current folder, the default behaviour if no arguments are specified:
       TaskExpressionPatcher.exe
    2. Find all packages in a specified folder, passing the folder as the argument:
       TaskExpressionPatcher.exe C:\Projects\Alpha\Packages\
    3. Find a specific package, passing the file path as the argument:
       TaskExpressionPatcher.exe C:\Projects\Alpha\Packages\Package.dtsx

    The code was written against SQL Server 2005, but just change the reference to Microsoft.SQLServer.ManagedDTS to be the SQL Server 2008 version and it will work fine. If you get an error Microsoft.SqlServer.Dts.Runtime.DtsRuntimeException: The package failed to load due to error 0xC0011008… then check that the package is from the correct version of SSIS compared to the referenced assemblies - 2005 vs 2008, in other words.

    Download Sample Project TaskExpressionPatcher.zip (6 KB)

    Read the article

  • Value of Certification Proven Again

    - by Paul Sorensen
    Hi Everyone,

    A recent article in Certification Magazine spells out the value of certification for IT professionals. The article provides some detailed insight on the state of the economy and what it means to IT professionals. A few of the key findings (among many) by Certification Magazine:

    - Even in this tough economy, IT worker salaries grew at about 9%.
    - Many respondents reported that they received raises after earning a new certification.
    - Many people reported that they had earned one or two new credentials within the last year.
    - Salaries for more entry-level certifications were stagnant over the last year or so.

    I encourage you to take a look at the article. It is very encouraging to see that companies and individuals still recognize and value the hard work that goes into getting certified.

    Thank you,

    Read the article

  • ASP.NET 4 Website Fails to Start on Your TFS 2010 Server?

    - by jdanforth
    Getting a “Could not find permission set named ‘ASP.Net’” error on your TFS 2010 server? It may have to do with the fact you’re trying to run ASP.NET 4 as a child site of a SharePoint Web site. The problem is described in the “ASP.NET 4 breaking changes” site:

    This error occurs because the ASP.NET 4 code access security (CAS) infrastructure looks for a permission set named ASP.Net. However, the partial trust configuration file that is referenced by WSS_Minimal does not contain any permission sets with that name. Currently there is not a version of SharePoint available that is compatible with ASP.NET 4. As a result, you should not attempt to run an ASP.NET 4 Web site as a child site underneath SharePoint Web sites.

    There is a workaround you could try by setting this in your web.config, if you know what you’re doing:

        <trust level="Full" originUrl="" />

    Read the article

  • Anti-aliasing works for debug runtime but not retail runtime

    - by DeadMG
    I'm experimenting with various graphics settings in my Direct3D 9 application, and I'm currently facing a curious problem with anti-aliasing. When running under the debug runtime, AA works as expected, and I don't have any errors or warnings. But when running under the retail runtime, the image isn't anti-aliased at all. I don't get any errors, and the device creates and executes just fine. As I honestly have little idea where the problem is, I will simply give a relatively high-level overview of the architecture involved, rather than specific problematic code. Simply put, I render my 3D content to a texture, which I then render to the back buffer. Any suggestions as to where to look?

    Read the article

  • .NET Security Part 3

    - by Simon Cooper
    You write a security-related application that allows addins to be used. These addins (as dlls) can be downloaded from anywhere and, if allowed to run full-trust, could open a security hole in your application. So you want to restrict what the addin dlls can do, using a sandboxed appdomain, as explained in my previous posts. But there needs to be an interaction between the code running in the sandbox and the code that created the sandbox, so the sandboxed code can control or react to things that happen in the controlling application. Sandboxed code needs to be able to call code outside the sandbox.

    Now, there are various methods of allowing cross-appdomain calls, the two main ones being .NET Remoting with MarshalByRefObject, and WCF named pipes. I’m not going to cover the details of setting up such mechanisms here, or which you should choose for your specific situation; there are plenty of blogs and tutorials covering such issues elsewhere. What I’m going to concentrate on here is the more general problem of running fully-trusted code within a sandbox, which is required in most methods of app-domain communication and control.

    Defining assemblies as fully-trusted

    In my last post, I mentioned that when you create a sandboxed appdomain, you can pass in a list of assembly strongnames that run as full-trust within the appdomain:

        // get the Assembly object for the assembly
        Assembly assemblyWithApi = ...

        // get the StrongName from the assembly's collection of evidence
        StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>();

        // create the sandbox
        AppDomain sandbox = AppDomain.CreateDomain(
            "Sandbox",
            null,
            appDomainSetup,
            restrictedPerms,
            apiStrongName);

    Any assembly that is loaded into the sandbox with a strong name the same as one in the list of full-trust strong names is unconditionally given full-trust permissions within the sandbox, regardless of permissions and sandbox setup. This is very powerful! You should only use this for assemblies that you trust as much as the code creating the sandbox.

    So now you have a class that you want the sandboxed code to call:

        // within assemblyWithApi
        public class MyApi
        {
            public static void MethodToDoThings() { ... }
        }

        // within the sandboxed dll
        public class UntrustedSandboxedClass
        {
            public void DodgyMethod()
            {
                ...
                MyApi.MethodToDoThings();
                ...
            }
        }

    However, if you try to do this, you get quite an ugly exception:

        MethodAccessException: Attempt by security transparent method
        'UntrustedSandboxedClass.DodgyMethod()' to access security critical
        method 'MyApi.MethodToDoThings()' failed.

    Security transparency, which I covered in my first post in the series, has entered the picture. Partially-trusted code runs at the Transparent security level, fully-trusted code runs at the Critical security level, and Transparent code cannot under any circumstances call Critical code.

    Security transparency and AllowPartiallyTrustedCallersAttribute

    So the solution is easy, right? Make MethodToDoThings SafeCritical; then the transparent code running in the sandbox can call the api:

        [SecuritySafeCritical]
        public static void MethodToDoThings() { ... }

    However, this doesn’t solve the problem. When you try again, exactly the same exception is thrown; MethodToDoThings is still running as Critical code. What’s going on?

    By default, a fully-trusted assembly always runs Critical code, regardless of any security attributes on its types and methods.
    This is because it may not have been designed in a secure way when called from transparent code - as we’ll see in the next post, it is easy to open a security hole despite all the security protections .NET 4 offers. When exposing an assembly to be called from partially-trusted code, the entire assembly needs a security audit to decide what should be transparent, safe critical, or critical, and to close any potential security holes.

    This is where AllowPartiallyTrustedCallersAttribute (APTCA) comes in. Without this attribute, fully-trusted assemblies run Critical code, and partially-trusted assemblies run Transparent code. When this attribute is applied to an assembly, it confirms that the assembly has had a full security audit, and it is safe to be called from untrusted code. All code in that assembly runs as Transparent, but SecurityCriticalAttribute and SecuritySafeCriticalAttribute can be applied to individual types and methods to make those run at the Critical or SafeCritical levels, with all the restrictions that entails.

    So, to allow the sandboxed assembly to call the full-trust API assembly, simply add APTCA to the API assembly:

        [assembly: AllowPartiallyTrustedCallers]

    and everything works as you expect. The sandboxed dll can call your API dll, and from there communicate with the rest of the application.

    Conclusion

    That’s the basics of running a full-trust assembly in a sandboxed appdomain, and allowing a sandboxed assembly to access it. The key is AllowPartiallyTrustedCallersAttribute, which is what lets partially-trusted code call a fully-trusted assembly. However, an assembly with APTCA applied to it means that you need to have run a full security audit of every type and member in the assembly. If you haven’t, then you could inadvertently open a security hole. I’ll be looking at ways this can happen in my next post.

    Read the article
