Search Results

Search found 11321 results on 453 pages for 'shared libraries'.


  • Join Me at JavaOne!

    - by HecklerMark
JavaOne 2012 is less than a week away! If you've already made plans to be there, you're probably getting pretty excited about it already...and if not, what are you waiting for?!? Before I get to the session information, I want to point out that qualified students get free admission to JavaOne, so if you are (or know) a CS or IT (or other tech-leaning) student who might like to attend, follow the link and start making plans. There is so much there to learn and experience. I'm happy to say I'll be a small part of the festivities. I'll be leading the following session: CON3519 - Building Hybrid Cloud Apps: Local Databases + The Cloud = Extreme Versatility In this session, learn how to design and develop applications that leverage both local storage and the cloud, maximizing the strengths of each. Using NetBeans, JavaServer Faces 2.0, GlassFish Server technology, JavaFX 2, Oracle Database, and Evernote, rapidly create prototypical applications that can be deployed in various environments and scaled up/out with enterprise cloud solutions.  As a contributor to the JFXtras project, I also hope to attend the following "Birds Of a Feather" (BOF) session led by Gerrit Grunwald and Stephen Chin: BOF5503 - JFXtras Super Happy Dev BOF JFXtras, the open source JavaFX control and extensions project, is back for JavaFX 2.0. In this session, you will learn about the latest changes in JFXtras 2.0, including new components, controls, and features that integrate with the JavaFX 2.0 libraries. Expect to meet the JFXtras core team members as well as other interesting client RIA implementers and developers. Now that JavaFX is coded in Java, a few server-side hackers may even be let in the door. If you're there, please stop by and introduce yourself! And to follow along with my J1 travels or keep in contact afterward, please follow me on Twitter or connect via G+ or Facebook (links in panel to right). Hope to see you there, but either way, keep the Java flowing! All the best, Mark

    Read the article

  • This is the End of Business as Usual...

    - by Michael Snow
This week, we'll be hosting our last Social Business Thought Leader Series Webcast for 2012. Our featured guest this week will be Brian Solis of Altimeter Group. As we've been going through the preparations for Brian's webcast, it became very clear that an hour's time is barely scraping the surface of the depth of Brian's insights and analysis. Accordingly, in the spirit of sharing Brian's perspective for all of our readers, we'll be featuring guest posts all this week pulled from Brian's larger collection of blog postings on his own website. If you like what you've read here this week, we highly recommend digging deeper into his tome of wisdom. Guest Post by Brian Solis, Analyst, Altimeter Group, as originally featured on his site with the minor change of the video addition at the beginning of the post. This is the End of Business as Usual and the Beginning of a New Era of Relevance - Brian Solis, Principal Analyst, Altimeter Group The Times They Are A-Changin’ Come gather ’round people Wherever you roam And admit that the waters Around you have grown And accept it that soon You’ll be drenched to the bone If your time to you Is worth savin’ Then you better start swimmin’ Or you’ll sink like a stone For the times they are a-changin’. - Bob Dylan I’m sure you are wondering why I chose lyrics to open this article. If you skimmed through them, stop here for a moment. Go back through Dylan’s words and take your time. Carefully read, and feel, what it is he’s saying and savor the moment to connect the meaning of his words to the challenges you face today. His message is as important and true today as it was when the words were first written in 1964. The tide is indeed once again turning. And even though the 60s now live in the history books, right here, right now, Dylan is telling us once again that this is our time to not only sink or swim, but to do something amazing. This is your time. This is our time. But these times are different, and what comes next is difficult to grasp. How people communicate. How people learn and share. How people make decisions. Everything is different now. Think about this…you’re reading this article because it was sent to you via email. Yet more people spend their online time in social networks than they do in email. Duh. According to Nielsen, of the total time spent online, 22.5% is spent connecting and communicating in social networks. To put that in perspective, the time spent in the likes of Facebook, Twitter, and YouTube is greater than online gaming at 9.8%, email at 7.6% and search at 4%. Imagine for a moment if you and I were connected to one another in Facebook, which just so happens to be the largest social network in the world. How big? Well, Facebook is the size today of the entire Internet in 2004. There are over 1 billion people friending, Liking, commenting, sharing, and engaging in Facebook…that’s roughly 12% of the world’s population. Twitter has over 200 million users. Ever hear of Tumblr? More time is spent on this popular microblogging community than Twitter. The point is that the landscape for communication and all that’s affected by human interaction is profoundly different than how you and I learned, shared or talked to one another yesterday. This transformation is only becoming more pervasive, and it’s not going back. Survival of the Fitting But social media is just one of the channels we can use to reach people. I must be honest. I’m as much a part of tomorrow as I am of yesteryear.
It’s why I spend all of my time researching the evolution of media and its impact on business and culture. Because of you, I share everything I learn in newsletters, emails, blogs, YouTube videos, and also traditional books. I’m dedicated to helping everyone not only understand, but grasp the change that’s before you. Technologies such as social, mobile, virtual, augmented, et al compel us to adapt our story and value proposition and extend our reach to be part of communities we don’t realize exist. The people who will keep you in business or running tomorrow are the very people you’re not reaching today. Before you continue to read on, allow me to clarify my point of view. My inspiration for writing this is to help you augment, not necessarily replace, the programs you’re running today. We must still reach those who matter to us in the ways they prefer to be engaged. To reach what I call the connected consumer of Generation-C, we too must reach them in the ways they wish to be engaged. And in all of my work, how they connect, talk to one another, influence others, and make decisions are not at all like the traditional consumers of the past. Nor are they merely the kids…the Millennials. Connected consumers are represented across every age group and demographic. As you can see, use of social networks, media sharing sites, microblogs, blogs, etc. spans equally across Gen Y, Gen X, and Baby Boomers. The DNA of connected customers is indiscriminate of age or any other demographic for that matter. This is more about psychographics, the linkage of people through common interests, rather than their age, gender, education, nationality or level of income. Once someone is introduced to the marvels of connectedness, the sensation becomes a contagion. It touches and affects everyone. And, that’s why this isn’t going anywhere but normalcy. Social networking isn’t just about telling people what you’re doing. Nor is it just about generic, meaningless conversation. Today’s connected consumer is incredibly influential. They’re connected to hundreds and even thousands of other like-minded people. What they experience and what they support is shared throughout these networks, and as information travels, it shapes and steers impressions, decisions, and experiences of others. For example, if we revisit the Nielsen research, we get an idea of just how big this is becoming. 75% spend heavily on music. How does that translate to the arts? I’d imagine the number is equally impressive. If 53% follow their favorite brand or organization, imagine what’s possible. Just like this email list that connects us, connections in social networks are powerful. The difference, however, is that people spend more time in social networks than they do in email. Everything begins with an understanding of the “5 W’s and H.E.” – Who, What, When, Where, How, and to What Extent? The data that comes back tells you which networks are important to the people you’re trying to reach, how they connect, what they share, what they value, and how to connect with them. From there, your next steps are to create a community strategy that extends your mission, vision, and value and aligns it with the interests, behavior, and values of those you wish to reach and galvanize. To help, I’ve prepared an action list for you, otherwise known as the 10 Steps Toward New Relevance: 1. Answer why you should engage in social networks and why anyone would want to engage with you 2.
Observe what brings them together and define how you can add value to the conversation 3. Identify the influential voices that matter to your world, recognize what’s important to them, and find a way to start a dialogue that can foster a meaningful and mutually beneficial relationship 4. Study the best practices of not just organizations like yours, but also those who are successfully reaching the type of people you’re trying to reach – it’s benchmarking against competitors and benchmarking against undefined opportunities 5. Translate all you’ve learned into a convincing presentation written to demonstrate tangible opportunity to your executive board, make the case through numbers, trends, data, insights – understanding they have no idea what’s going on out there and you are both the scout and the navigator (start with a recommended pilot so everyone can learn together) 6. Listen to what they’re saying and develop a process to learn from activity and adapt to interests and steer engagement based on insights 7. Recognize how they use social media and innovate based on what you observe to captivate their attention 8. Align your objectives with their objectives. If you’re unsure of what they’re looking for…ask 9. Invest in the development of content and engagement 10. Build a community, invest in values, spark meaningful dialogue, and offer tangible value…the kind of value they can’t get anywhere else. Take advantage of the medium and the opportunity! The reality is that we live and compete in a perpetual era of Digital Darwinism, the evolution of consumer behavior when society and technology evolve faster than our ability to adapt. This is why it’s our time to alter our course. We must connect with those who are defining the future of engagement, commerce, business, and how the arts are appreciated and supported. Even though it is the end of business as usual, it is the beginning of a new age of opportunity. The consumer revolution is already underway, and the question is: How do you better understand the role you play in this production as a connected or social consumer as well as business professional? Again, this is your time to define a new era of engagement and relevance. Originally written for The National Arts Marketing Project Connect with Brian via: Twitter | LinkedIn | Facebook | Google+ --- Note from Michael: If you really like this post above - check out Brian's TEDTalk and his thought process for preparing it in this post: http://www.briansolis.com/2012/10/tedtalk-reinventing-consumer-capitalism-screw-business-as-usual/

    Read the article

  • Are You Using Windows Live Mesh?

    - by Ben Griswold
Most of the time, I’m the guy who authors the show notes for the Herding Code Podcast.  The workflow is relatively straightforward: Jon shares the pre-production audio with me, I complete my write-up and then ship the notes back to Jon for publishing with the edited audio.  All file sharing is done with shared folders in Windows Live Mesh. The director of my kid’s preschool was looking for a way to access her work computer from her home office.  VPN connection?  Remote desktop?  FTP?  Nope. I installed Windows Live Mesh in a matter of minutes, synchronized a number of folders and she was off and running.  (The neat thing is she’s running a PC in the office and a Mac at home.) I was using Dropbox before discovering Mesh. Dropbox is still very cool but I’m in and out of Mesh enough that it’s taken over.  Actually I still have a Dropbox folder – it’s just being synced by Mesh now. If you’re interested in giving Live Mesh a whirl, here are the notable links as found on the product’s site: What you need Create your mesh Sync folders Share folders Use your Live Desktop Connect to a remote computer Use a mobile phone Good luck!

    Read the article

  • Managing Personal Projects As Solo Developer - Getting out of depth and failing projects

    - by James Jeffery
I need some advice on project management. I start a project, and often it will be a large project for a solo developer. Usually it's a web project. I handle everything from the UI, to the JS, PHP, server management etc. Half way in I feel out of my depth. I lose where I am, so I spend a couple of days away from the project to avoid the stress, and before you know it, it becomes another unfinished project. I try to use frameworks and code libraries to make my developments easier on myself. Sometimes I will complete a project so it "works" and then go back and handle errors, design the UI properly and stuff. But without fail I will always end up out of my depth. I've thought about outsourcing tasks such as the UI and the behaviour, and focusing just on the PHP - which I feel is my strong point. But then pride kicks in, and I don't feel at one with a project I haven't completed myself. Does this make sense? I am sure there are many others who have felt like this either at home, or at work, and I would love some advice on managing my projects better.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 20 (sys.dm_tran_locks)

    - by Tamarick Hill
The sys.dm_tran_locks DMV is used to return active lock resources on your server. Locking is a mechanism used by SQL Server to protect the integrity of data when you have multiple users that may potentially access the same data at the same time. Let’s run a query against this DMV so we can analyze the results. SELECT * FROM sys.dm_tran_locks As we can see, there is a lot of lock information returned from this DMV. I will not go into detail about each of the columns returned, but I will touch on the ones that I feel are the most important. The first column in the output is the resource_type column, which tells you the type of lock a particular row represents. It could be a PAGE lock, RID, OBJECT, DATABASE, or several other lock types. The resource_database_id represents the id of the database for a particular lock resource. The resource_lock_partition column represents the ID of a lock partition. When you have a table that is partitioned, locks can be escalated to the partition level before going to a table level lock. The request_mode column gives us information about the type of lock that is being requested. From the output above we see RangeS-S locks, which represent shared range locks, and IS locks, which represent intent shared locks. The request_status column displays whether the lock has been granted or whether the lock is waiting to be acquired. The request_session_id shows the session_id that is requesting the lock. This DMV is the best place to go when you need to identify the exact locks that are being held or pending for individual requests. You might need this information when you are troubleshooting severe blocking or deadlocking problems on your server. For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms190345.aspx Follow me on Twitter @PrimeTimeDBA
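
    For example (a quick sketch building on the query above, not from the original post), you can narrow the output to just the lock requests that are still waiting, which is usually where a blocking investigation starts:

      -- Show only lock requests that have not yet been granted
      SELECT resource_type,
             DB_NAME(resource_database_id) AS database_name,
             request_mode,
             request_status,
             request_session_id
      FROM sys.dm_tran_locks
      WHERE request_status <> 'GRANT' -- 'WAIT' or 'CONVERT' requests are blocked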

    Read the article

  • Ask The Readers: What Are Your Best Malware Fighting Tricks?

    - by Jason Fitzpatrick
Malware has become increasingly sophisticated and widespread; it’s more important than ever to have a robust toolkit for dealing with it. This week we want to hear about your favorite tips and tricks for dealing with malware infestations. Photo background by clix. Dealing with malware infestations usually takes more than simply running an anti-virus scanner. This week we want to hear your best tips, tricks, and unique tools for dealing with malware on your computer or, more likely, the computers of unwitting friends and relatives. Here are a few tips we’ve shared in the past to highlight what we’re talking about when we ask for tips (as opposed to simple recommendations for a certain AV application): Here’s a Super Simple Trick to Defeating Fake Anti-Virus Malware How To Remove Internet Security 2010 and other Rogue/Fake Antivirus Malware How To Remove Antivirus Live and Other Rogue/Fake Antivirus Malware How To Remove Security Tool and other Rogue/Fake Antivirus Malware

    Read the article

  • Couldn't find package - But package is listed in the Packages file

    - by Chris
(Quoted items are redacted elements) I am using a private repository and am currently trying to repackage some 3rd-party packages. I extract the package, make a few modifications (just the control files, to fit with company policy - sometimes file install locations too, though not in this case) and repackage (and usually rename). Normally I copy the files into a new blank debhelper project and reconstruct the package; however, with a recent one I am attempting to convert, some libraries and stuff aren't linking properly (I did copy the postinst, postrm, and preinst files along with all DEBIAN files exactly) - the original package worked, but my repackage doesn't, despite providing the same files in the same locations and the same postinst and preinst. So I was attempting to just modify the current package's control files (as the original package is not very good and will not list in our repository, and getting a better one from the 3rd party is not an option). I also renamed the package. I did the following: dpkg-deb -R "directory" Modify DEBIAN/control dpkg-deb -b "directory" "package name I want" I did this and put it in our repository. The package shows up in the "Packages" file on the repository and running apt-get update on the client side shows the package in: /var/lib/apt/lists/"server"_"location"_Packages However when I do an apt-get install on the package name (as listed in the Packages file - I did a copy paste) it says it can't find the package. Same with an apt-cache search. The Packages listing is as follows (name redacted): Package: "package name" Priority: extra Section: unknown Maintainer: "maintainer" Architecture: any Version: 1.0-lucid5 Depends: libc Filename: "directory"/"package_filename" Size: 2206292 MD5sum: "md5sum" SHA1: "sha key" SHA256: "sha256 key" Description: "description" I am running as sudo (and tried as root as well). I don't understand why apt-get won't see the package. Can you point out any flaws in what I have done, or perhaps offer some help on getting apt-get to see the package properly? Or perhaps an alternative. I am not even sure if this is a valid way to repackage something. Thanks.
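
    For reference, a condensed sketch of the unpack/edit/rebuild/verify loop described above (directory and package names are placeholders):

      # Unpack, edit the control file, and rebuild under the new name
      dpkg-deb -R original.deb workdir
      vi workdir/DEBIAN/control          # adjust Package:, Version:, Architecture:, etc.
      dpkg-deb -b workdir new-package.deb

      # Sanity-check the result before uploading it to the repository
      dpkg-deb -I new-package.deb        # control data exactly as apt will see it
      dpkg-deb -c new-package.deb        # file list / install locations

      # On the client, after the repository is updated
      sudo apt-get update
      apt-cache policy new-package       # confirms whether apt can actually see it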

    Read the article

  • ArchBeat Link-o-Rama for December 11, 2012

    - by Bob Rhubart
Good To Know - Conflicting View Objects and Shared Entity | Andrejus Baranovskis Oracle ACE Director Andrejus Baranovskis shares his thoughts—and a sample application—dealing with an "interesting ADF behavior" encountered over the weekend. Patching Oracle Exalogic - Updating Linux on the Compute Nodes - Part 1 | Jos Nijhoff Jos Nijhoff launches a series of posts that deal with "patching the operating system on the modified Sun Fire X4170 M2 servers...dubbed compute nodes in Exalogic terminology." Expanding on requestaudit - Tracing who is doing what...and for how long | Kyle Hatlestad "One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing," says Oracle Fusion Middleware A-Team architect Kyle Hatlestad. Get up close and technical in his post. Oracle Data Integrator Presentation from NYOUG Webinar | Gurcan Orhan Oracle ACE Director and award-winning data warehouse architect Gurcan Orhan shares his presentation from the recent NYOUG LI SIG. SOA 11g Technology Adapters – ECID Propagation | Greg Mally "Many SOA Suite 11g deployments include the use of the technology adapters for various activities including integration with FTP, database, and files to name a few," says Oracle Fusion Middleware A-Team member Greg Mally. "Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective." Greg's post focuses on technical tips for dealing with one of these challenges. Missing Duties for RUP3 upgrade in Fusion Applications | Richard from the Oracle Fusion Middleware A-Team explains how to safely apply policy store changes in thirteen easy steps. Thought for the Day "Well over half of the time you spend working on a project (on the order of 70 percent) is spent thinking, and no tool, no matter how advanced, can think for you." — Frederick P. Brooks Source: SoftwareQuotes.com

    Read the article

  • Optimizing Solaris 11 SHA-1 on Intel Processors

    - by danx
SHA-1 is a "hash" or "digest" operation that produces a 160 bit (20 byte) checksum value on arbitrary data, such as a file. It is intended to uniquely identify text and to verify it hasn't been modified. Max Locktyukhin and others at Intel have improved the performance of the SHA-1 digest algorithm using multiple techniques. This code has been incorporated into Solaris 11 and is available in the Solaris Crypto Framework via libmd(3LIB), the industry-standard libpkcs11(3LIB) library, and the Solaris kernel module sha1. The optimized code is used automatically on systems with an x86 CPU supporting SSSE3 (Supplemental SSE3). Intel microprocessor architectures that support SSSE3 include the Nehalem, Westmere, and Sandy Bridge microprocessor families. Further optimizations are available for microprocessors that support AVX (such as Sandy Bridge). Although SHA-1 is considered obsolete because of weaknesses found in the algorithm (NIST recommends using at least SHA-256), SHA-1 is still widely used and will be with us for a while more. Collisions (the same SHA-1 result for two different inputs) can be found with moderate effort. SHA-1 is still used heavily in SSL/TLS, for example. And SHA-1 is stronger than the older MD5 digest algorithm, another digest option defined in SSL/TLS. Optimizations Review SHA-1 operates by reading an arbitrary amount of data. The data is read in 512 bit (64 byte) blocks (the last block is padded in a specific way to ensure it's a full 64 bytes). Each 64 byte block has 80 "rounds" of calculations (consisting of a mixture of "ROTATE-LEFT", "AND", and "XOR") applied to the block. Each round produces a 32-bit intermediate result, called W[i]. Here's how each round operates: The first 16 rounds, rounds 0 to 15, read the 512 bit block 32 bits at-a-time. These 32 bits are used as input to the round. The remaining rounds, rounds 16 to 79, use the results from the previous rounds as input. Specifically, round i XORs the results of rounds i-3, i-8, i-14, and i-16 and rotates the result left 1 bit. The remaining calculations for the round are a series of AND, XOR, and ROTATE-LEFT operators on the 32-bit input and some constants. The 32-bit result is saved as W[i] for round i. The 160-bit state remaining after the final round, round 79, is the SHA-1 checksum. Optimization: Vectorization The first 16 rounds can be vectorized (computed in parallel) because they don't depend on the output of a previous round. As for the remaining rounds, because computing round i depends on the result of round i-3, W[i-3] (step 2 above), one can vectorize only 3 rounds at-a-time. Max Locktyukhin found through simple factoring, explained in detail in his article referenced below, that the dependencies of round i on the results of rounds i-3, i-8, i-14, and i-16 can be replaced instead with dependencies on the results of rounds i-6, i-16, i-28, and i-32. That is, instead of initializing intermediate result W[i] with:
    W[i] = (W[i-3] XOR W[i-8] XOR W[i-14] XOR W[i-16]) ROTATE-LEFT 1
    initialize W[i] as follows:
    W[i] = (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) ROTATE-LEFT 2
    That means that 6 rounds can be vectorized at once, with no additional calculations, instead of just 3! This optimization is independent of Intel or any other microprocessor architecture (although the microprocessor has to support vectorization to use it) and exploits one of the weaknesses of SHA-1. Optimization: SSSE3 Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide.
The 4 32-bit inputs to a round, W[i-6], W[i-16], W[i-28], W[i-32], all fit in one %xmm register. The following code snippet, from Max Locktyukhin's article, converted to ATT assembly syntax, computes 4 rounds in parallel with just a dozen or so SSSE3 instructions:
      movdqa  W_minus_04, W_TMP
      pxor    W_minus_28, W            // W equals W[i-32:i-29] before XOR
                                       // W = W[i-32:i-29] ^ W[i-28:i-25]
      palignr $8, W_minus_08, W_TMP    // W_TMP = W[i-6:i-3], combined from
                                       // W[i-4:i-1] and W[i-8:i-5] vectors
      pxor    W_minus_16, W            // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
      pxor    W_TMP, W                 // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]
      movdqa  W, W_TMP                 // 4 dwords in W are rotated left by 2
      psrld   $30, W                   // rotate left by 2: W = (W >> 30) | (W << 2)
      pslld   $2, W_TMP
      por     W, W_TMP
      movdqa  W_TMP, W                 // four new W values W[i:i+3] are now calculated
      paddd   (K_XMM), W_TMP           // adding 4 current round's values of K
      movdqa  W_TMP, (WK(i))           // storing for downstream GPR instructions to read
    A window of the 32 previous results, W[i-1] to W[i-32], is saved in memory on the stack. This is best illustrated with a chart. Without vectorization, computing the rounds is like this (each "R" represents 1 round of SHA-1 computation):
    RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR
    With vectorization, 4 rounds can be computed in parallel:
    RRRRRRRRRRRRRRRRRRRR
    RRRRRRRRRRRRRRRRRRRR
    RRRRRRRRRRRRRRRRRRRR
    RRRRRRRRRRRRRRRRRRRR
    Optimization: AVX The new "Sandy Bridge" microprocessor architecture, which supports AVX, allows another interesting optimization. SSSE3 instructions have two operands, an input and an output. AVX allows three operands, two inputs and an output. In many cases two SSSE3 instructions can be combined into one AVX instruction. The difference is best illustrated with an example. Consider these two instructions from the snippet above:
      pxor W_minus_16, W               // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
      pxor W_TMP, W                    // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]
    With AVX they can be combined in one instruction:
      vpxor W_minus_16, W, W_TMP       // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]
    This optimization is also in Solaris, although Sandy Bridge-based systems aren't widely available yet. As an exercise for the reader, AVX also has 256-bit media registers, %ymm0 - %ymm15 (a superset of the 128-bit %xmm0 - %xmm15). Can %ymm registers be used to parallelize the code even more? Optimization: Solaris-specific In addition to using the Intel code described above, I performed other minor optimizations to the Solaris SHA-1 code: Increased the digest(1) and mac(1) commands' buffer size from 4K to 64K, as previously done for decrypt(1) and encrypt(1). This size is well suited for ZFS file systems, but helps for other file systems as well. Optimized encode functions, which byte swap the input and output data, to copy/byte-swap 4 or 8 bytes at-a-time instead of 1 byte at-a-time. Enhanced the Solaris mdb(1) and kmdb(1) debuggers to display all 16 %xmm and %ymm registers (mdb "$x" command). Previously they only displayed the first 8 that are available in 32-bit mode. Can't optimize if you can't debug :-). Changed the SHA-1 code to allow processing in "chunks" greater than 2 Gigabytes (64-bits). Performance I measured performance on a Sun Ultra 27 (which has a Nehalem-class Xeon 5500 Intel W3570 microprocessor @3.2GHz). Turbo mode is disabled for consistent performance measurement.
Graphs are better than words and numbers, so here they are: The first graph shows the Solaris digest(1) command before and after the optimizations discussed here, contained in libmd(3LIB). I ran the digest command on a half GByte file in swapfs (/tmp) and execution time decreased from 1.35 seconds to 0.98 seconds. The second graph shows the results of an internal microbenchmark that uses the Solaris libpkcs11(3LIB) library. The operations are on a 128 byte buffer with 10,000 iterations. The results show operations increased from 320,000 to 416,000 operations per second. Finally, the third graph shows the results of an internal kernel microbenchmark that uses the Solaris /kernel/crypto/amd64/sha1 module. The operations are on a 64Kbyte buffer with 100 iterations. The results show for 1 kernel thread, operations increased from 410 to 600 MBytes/second. For 8 kernel threads, operations increased from 1540 to 1940 MBytes/second. Availability This code is in Solaris 11 FCS. It is available in the 64-bit libmd(3LIB) library for 64-bit programs and is in the Solaris kernel. You must be running hardware that supports Intel's SSSE3 instructions (for example, Intel Nehalem, Westmere, or Sandy Bridge microprocessor architectures). The easiest way to determine if SSSE3 is available is with the isainfo(1) command. For example:
    nehalem $ isainfo -v
    64-bit amd64 applications
    sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
    32-bit i386 applications
    sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu
    If the output also shows "avx", Solaris executes the even-more-optimized 3-operand AVX instructions for SHA-1 mentioned above:
    sandybridge $ isainfo -v
    64-bit amd64 applications
    avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
    32-bit i386 applications
    avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu
    No special configuration or setup is needed to take advantage of this code. The Solaris libraries and kernel automatically determine if they are running on an SSSE3- or AVX-capable machine and execute the correctly-tuned code for that microprocessor. Summary The Solaris 11 Crypto Framework, via the sha1 kernel module and the libmd(3LIB) and libpkcs11(3LIB) libraries, incorporated a useful SHA-1 optimization from Intel for SSSE3-capable microprocessors. As with other Solaris optimizations, they come automatically "under the hood" with the current Solaris release. References "Improving the Performance of the Secure Hash Algorithm (SHA-1)" by Max Locktyukhin (Intel, March 2010). The source for these SHA-1 optimizations used in Solaris "SHA-1", Wikipedia Good overview of SHA-1 FIPS 180-1 SHA-1 standard (FIPS, 1995) NIST Comments on Cryptanalytic Attacks on SHA-1 (2005, revised 2006)
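
    To make the message-schedule factoring concrete, here is a small scalar C sketch (mine, not from the Solaris or Intel sources) of the two equivalent recurrences described above; the nearest dependency drops from W[i-3] to W[i-6], which is what lets 4 values per %xmm register (or 6 rounds in the abstract) be computed at once:

      #include <stdint.h>

      #define ROTL32(x, n) (((x) << (n)) | ((x) >> (32 - (n))))

      /* Classic SHA-1 message schedule: nearest dependency is W[i-3],
         so at most 3 rounds can be computed in parallel. */
      void schedule_classic(uint32_t W[80])
      {
          for (int i = 16; i < 80; i++)
              W[i] = ROTL32(W[i-3] ^ W[i-8] ^ W[i-14] ^ W[i-16], 1);
      }

      /* Locktyukhin's factored form: nearest dependency is W[i-6], so 6 rounds
         can be computed in parallel. Valid for i >= 32; entries 16..31 still
         use the classic recurrence above. */
      void schedule_factored(uint32_t W[80])
      {
          for (int i = 32; i < 80; i++)
              W[i] = ROTL32(W[i-6] ^ W[i-16] ^ W[i-28] ^ W[i-32], 2);
      }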

    Read the article

  • UPK Professional Customer Success Story: Medtronic

    - by [email protected]
    In case you missed the live event, be sure to listen to last week's UPK Customer iSeminar featuring Medtronic. This was the first iSeminar in our quarterly series to showcase UPK Professional (UPK and Knowledge Pathways). Donna Miller and Staci Gilbert gave viewers an inside look at samples of Medtronic's content as they shared their experiences, methodology and best practices for use of the solution. Here are some highlights of the call: • Medtronic initially purchased UPK Professional to support a multi-year, global SAP rollout for 9,000 end users located in 24 countries. • As time went on, they expanded their use of UPK Professional to include several of their other enterprise applications: PeopleSoft, Siebel CRM, Hyperion Financial Management, a number of SAP bolt-ons, Documentum, TrackWise, and many others. • In combination with their Saba LMS, UPK Professional has allowed Medtronic to create, deploy, track and certify consistent end user training for critical transactions and processes across their organization worldwide - essential for a company in a heavily regulated industry. • For key pieces of content or certain end user populations, some Medtronic business units localize/translate the global UPK content. Staci demonstrated examples of their SAP content which has been translated into Japanese. • In the live SAP environment, end users rely on UPK's context sensitive in-application performance support. Medtronic has found this to be very helpful post go-live, giving just-in-time support so end users are confident in a new system or when performing tasks they don't often touch (at quarter or year end). UPK also serves as Medtronic's internal Google. • Medtronic has realized savings on many fronts: reduction in support calls due to in-application performance support, elimination of their training clients, and speedier training (1.5 days rather than 5-7 days) of temporary workers by moving from ILT to a blended solution that includes UPK simulations for eLearning. Thanks again to Donna and Staci for an exceptional presentation. They offered so many great examples for anyone who's looking for ways to get more out of UPK or interested in learning about UPK Professional: Knowledge Pathways. - Karen Rihs, Oracle UPK Outbound Product Management

    Read the article

  • How to Setup an Active Directory Domain-Week 26

    - by OWScott
Today's lesson covers how to create an Active Directory domain and join a member server to it. This week's topic takes a slightly different turn from the normally IIS-related topics, but this is a key video to help set up either a test or production environment that requires Active Directory. Part of being a web administrator is understanding the servers and how they interact with each other. This week’s lesson takes a different path than usual and covers how to create an Active Directory domain and how to join a member computer to that domain. In less than 13 minutes we complete the entire process, end to end. An understanding of Active Directory is useful, whether it’s simply to set up a test lab, or to learn more so that you can manage a production domain environment. This week starts a mini-series on web farms. Today’s lesson is on setting up a domain, which is a necessary prerequisite for next week, which will be on Distributed File System Replication (DFS-R), a useful technology for web farms. Upcoming lessons will cover shared configuration, Application Request Routing (ARR), and more. Additionally, this video introduces us to Vaasnet (www.vaasnet.com), a service that allows the web pro to gain immediate access to an entire lab environment for situations such as these. This is week 26 (the middle week!) of a 52 week series for the Web Pro. Past and future videos can be found here: http://dotnetslackers.com/projects/LearnIIS7/ You can find this week’s video here.

    Read the article

  • SQL SERVER – Retrieve SQL Server Installation Date Time

    - by pinaldave
I have been asked this question a number of times and my answer has always been – search online and you will find the answer. Every single time someone has followed my answer, they have found the accurate answer within the first few clicks. However, this question is increasingly popular, so I have decided to answer it here. I usually prefer to create my own T-SQL script, but in today’s case I have taken the script from the web. I have seen this script in so many places that I do not know who the original creator is, so I am not sure who should get credit for it. Question: How to retrieve the SQL Server installation date? Answer: Run the following query and it will give you the date of the SQL Server installation. SELECT create_date FROM sys.server_principals WHERE sid = 0x010100000000000512000000 Question: I have installed the SQL Server Evaluation version; how do I know what the expiry date for it is? Answer: The SQL Server evaluation period is 180 days. The expiration date is always 180 days from the initial installation. The following query will give the expiration date of the evaluation version. -- Evaluation Version Expire Date SELECT create_date AS InstallationDate, DATEADD(DD, 180, create_date) AS 'Expiry Date' FROM sys.server_principals WHERE sid = 0x010100000000000512000000 GO I believe there is a way to do the same using the registry but I have not explored it personally. Now as I said earlier there are many different blog posts on this subject. Let me list a few which I really enjoyed reading personally as they shared a few more insights on this subject. Retrieving SQL Server 2012 Evaluation Period Expiry Date How to find the Installation Date for an Evaluation Edition of SQL Server Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
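
    Building on the queries above, a small sketch (mine, not from the original post) that computes how many days remain in an evaluation installation:

      -- Days remaining in the 180-day evaluation period
      SELECT create_date AS InstallationDate,
             DATEDIFF(DD, GETDATE(), DATEADD(DD, 180, create_date)) AS DaysRemaining
      FROM sys.server_principals
      WHERE sid = 0x010100000000000512000000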

    Read the article

  • What is the proper way to Windows 7/Ubuntu 10.10 Dual-Triple Boot Partitioning for Laptop OEM?

    - by Denja
Hi Linux Community, I find myself struggling with the slowness of the Windows OS once again. It's time to change to Ubuntu 10.10 64bit, as I'd like to use a faster operating system. My laptop hard disk has RECOVERY and HP_TOOLS partitions; they are both primary. I have the System Recovery DVD for Windows 64bit should anything bad happen. Here's the layout I used with Windows before: * (C:) Windows 7 system partition NTFS - 284,89GB (Primary, as Boot, Pagefile, Dump) * HP_TOOLS system partition FAT32 - 99MB (Primary) * (D:) RECOVERY partition NTFS - 12,90GB (Primary) * SYSTEM partition NTFS 199MB (Primary) Here's the layout I wanted to make: * (C:) Windows 7 system partition NTFS - 60GB (Primary) (sda1) * (D:) Windows DATA partition (user files) NTFS - 120GB (Primary) (sda2); I want to share this with Linux * Linux root Ext4 - 10GB (Extended) (sda3) (Ubuntu 10.10 64bit) * Linux home Ext3 - 90GB (Extended) (sda4) (Ubuntu 10.10 64bit) * Linux swap swap - RAM size, 3GB (sda5) * Linux root Ext3 - 18GB (Extended) (sda6) (OpenSuse or Puppy or Kubuntu) Here is my new Ubuntu 10.10 64bit layout in use now: * SYSTEM partition NTFS 199MB (Primary) (sda1) * (C:) Windows 7 system partition NTFS - 90GB (Primary) (sda2) * (D:) Windows 7 RECOVERY partition NTFS - 12,90GB (Primary) (sda3) * Linux system partition EXTENDED - 195,1GB (Logical) * Linux root Ext4 - 10GB (Extended) (sda4) * Linux swap swap - RAMx2 size, 6,1GB (sda5) * Linux home Ext3 - 179GB (Extended) (sda6) When I installed Ubuntu, I didn't know if I could wipe all previous partitions, because of the RECOVERY partition. So I just made the space for my extended partition with GParted by deleting the HP_TOOLS (Fat32) partition. By doing this I managed somehow to install Ubuntu 64 with success. And I also made the partitions for the swap or a third Linux OS, as Jordan suggested. But I couldn't actually make the partitions for the shared NTFS (no option!). Question 1: What is the proper way to do Windows 7/Ubuntu 10.10 dual/triple boot partitioning for an OEM laptop? Thank you in advance for your advice and suggestions, and Happy New Year to all!!

    Read the article

  • SQL SERVER – Get File Statistics Using fn_virtualfilestats

    - by pinaldave
Quite often when I am staring at my SSMS I wonder what is going on under the hood in my SQL Server. I often want to know which database is very busy and which database is a bit slow because of IO issues. Sometimes, I think at the file level as well. I want to know which MDF or NDF is busiest and doing most of the work. The following query gets those results very quickly. SELECT DB_NAME(vfs.DbId) DatabaseName, mf.name, mf.physical_name, vfs.BytesRead, vfs.BytesWritten, vfs.IoStallMS, vfs.IoStallReadMS, vfs.IoStallWriteMS, vfs.NumberReads, vfs.NumberWrites, (Size*8)/1024 Size_MB FROM ::fn_virtualfilestats(NULL,NULL) vfs INNER JOIN sys.master_files mf ON mf.database_id = vfs.DbId AND mf.FILE_ID = vfs.FileId GO When you run the above query you will get much valuable information, like the size of each file as well as how many reads and writes are done against each file. It also displays the read/write data in bytes. If there has been any IO stall (delay) in reads or writes, you can see that as well. I keep this handy but had not shared it on the blog earlier. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL View, T SQL, Technology Tagged: Statistics
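
    A small extension of the query above (my sketch, not from the original post): dividing the stall columns by the IO counts gives the average stall per read and per write, which makes the slowest files stand out immediately:

      -- Average IO stall (ms) per read and per write, per file
      SELECT DB_NAME(vfs.DbId) DatabaseName, mf.name,
             vfs.IoStallReadMS / NULLIF(vfs.NumberReads, 0) AS AvgStallPerRead_MS,
             vfs.IoStallWriteMS / NULLIF(vfs.NumberWrites, 0) AS AvgStallPerWrite_MS -- NULLIF avoids divide-by-zero on idle files
      FROM ::fn_virtualfilestats(NULL,NULL) vfs
      INNER JOIN sys.master_files mf ON mf.database_id = vfs.DbId AND mf.FILE_ID = vfs.FileId
      GO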

    Read the article

  • Wrong perspective is showing in Eclipse plugin project [closed]

    - by Arun Kumar Choudhary
I am working with the Eclipse Modeling Framework (Eclipse plugin development). In my project, the tool provides three perspectives: 1. Accelerator Analyst perspective, 2. Contract Validation, and 3. Underwriter Rules Editor. By default it starts with the Contract Validation perspective (as we define it within plugin_customization.ini). However, switching to another perspective sometimes does not change the perspective shown. All perspective attributes (class, id and name) are defined only inside plugin.xml, as it is the task of org.eclipse.ui.perspectives to bring that perspective to the forefront. Seven times out of ten it works fine, but I cannot figure out why it does not work in the other three cases. I am pasting my plugin.xml file: <?xml version="1.0" encoding="UTF-8"?> <?eclipse version="3.0"?> <plugin> <extension id="RuleEditor.application" name="Accelerator Tooling" point="org.eclipse.core.runtime.applications"> <application> <run class="com.csc.fs.underwriting.product.UnderWritingApplication"> </run> </application> </extension> <extension point="org.eclipse.ui.perspectives"> <perspective class="com.csc.fs.underwriting.product.ContractValidationPerspective" icon="icons/javadevhov_obj.gif" id="com.csc.fs.underwriting.product.ContractValidationPerspective" name="Contract Validation"> </perspective> </extension> <extension point="org.eclipse.ui.perspectives"> <perspective class="com.csc.fs.underwriting.product.UnderwritingPerspective" icon="icons/javadevhov_obj.gif" id="com.csc.fs.underwriting.product.UnderwritingPerspective" name="Underwriting"> </perspective> </extension> <extension id="product" point="org.eclipse.core.runtime.products"> <product application="com.csc.fs.nba.underwriting.application.RuleEditor.application" name="Rule Configurator Workbench" description="%AppName"> <property name="introTitle" value="Welcome to Accelerator Tooling"/> <property name="introVer" value="%version"/> <property name="introBrandingImage" value="product:csclogo.png"/> <property name="introBrandingImageText" value="CSC FSG"/> <property name="preferenceCustomization" value="plugin_customization.ini"/> <property name="appName" value="Rule Configurator Workbench"> </property> </product> </extension> <extension point="org.eclipse.ui.intro"> <intro class="org.eclipse.ui.intro.config.CustomizableIntroPart" icon="icons/Welcome.gif" id="com.csc.fs.nba.underwriting.intro"/> <introProductBinding introId="com.csc.fs.nba.underwriting.intro" productId="com.csc.fs.nba.underwriting.application.product"/> <intro class="org.eclipse.ui.intro.config.CustomizableIntroPart" id="com.csc.fs.nba.underwriting.application.intro"> </intro> <introProductBinding introId="com.csc.fs.nba.underwriting.application.intro" productId="com.csc.fs.nba.underwriting.application.product"> </introProductBinding> </extension> <extension name="Accelerator Tooling" point="org.eclipse.ui.intro.config"> <config content="$nl$/intro/introContent.xml" id="org.eclipse.platform.introConfig.mytest" introId="com.csc.fs.nba.underwriting.intro"> <presentation home-page-id="news"> <implementation kind="html" os="win32,linux,macosx" style="$nl$/intro/css/shared.css"/> </presentation> </config> <config content="introContent.xml" id="com.csc.fs.nba.underwriting.application.introConfigId" introId="com.csc.fs.nba.underwriting.application.intro"> <presentation home-page-id="root"> <implementation kind="html" os="win32,linux,macosx" style="content/shared.css"> </implementation> </presentation> </config> </extension> <extension
point="org.eclipse.ui.intro.configExtension"> <theme default="true" id="org.eclipse.ui.intro.universal.circles" name="%theme.name.circles" path="$nl$/themes/circles" previewImage="themes/circles/preview.png"> <property name="introTitle" value="Accelerator Tooling"/> <property name="introVer" value="%version"/> </theme> </extension> <extension point="org.eclipse.ui.ide.resourceFilters"> <filter pattern="*.dependency" selected="true"/> <filter pattern="*.producteditor" selected="true"/> <filter pattern="*.av" selected="true"/> <filter pattern=".*" selected="true"/> </extension> <extension point="org.eclipse.ui.splashHandlers"> <splashHandler class="com.csc.fs.nba.underwriting.application.splashHandlers.InteractiveSplashHandler" id="com.csc.fs.nba.underwriting.application.splashHandlers.interactive"> </splashHandler> <splashHandler class="com.csc.fs.underwriting.application.splashHandlers.InteractiveSplashHandler" id="com.csc.fs.underwriting.application.splashHandlers.interactive"> </splashHandler> <splashHandlerProductBinding productId="com.csc.fs.nba.underwriting.application" splashId="com.csc.fs.underwriting.application.splashHandlers.interactive"> </splashHandlerProductBinding> </extension> <extension id="com.csc.fs.pa.security" point="com.csc.fs.pa.security.implementation.secure"> <securityImplementation class="com.csc.fs.pa.security.PASecurityImpl"> </securityImplementation> </extension> <extension id="productApplication.security.pep" name="com.csc.fs.pa.producteditor.application.security.pep" point="com.csc.fs.pa.security.implementation.authorize"> <authorizationManager class="com.csc.fs.pa.security.authorization.PAAuthorizationManager"> </authorizationManager> </extension> <extension point="org.eclipse.ui.editors"> <editor class="com.csc.fs.underwriting.product.editors.PDFViewer" extensions="pdf" icon="icons/pdficon_small.gif" id="com.csc.fs.pa.producteditor.application.editors.PDFViewer" name="PDF Viewer"> </editor> </extension> <extension point="org.eclipse.ui.views"> <category id="com.csc.fs.pa.application.viewCategory" name="%category"> </category> </extension> <extension point="org.eclipse.ui.newWizards"> <category id="com.csc.fs.pa.application.newWizardCategory" name="%category"> </category> <category id="com.csc.fs.pa.application.newWizardInitialize" name="%initialize" parentCategory="com.csc.fs.pa.application.newWizardCategory"> </category> </extension> <extension point="com.csc.fs.pa.common.usability.addNewCategory"> <addNewCategoryId id="com.csc.fs.pa.application.newWizardCategory"> </addNewCategoryId> </extension> <!--extension point="org.eclipse.ui.activities"> <activity description="View Code Generation Option" id="com.csc.fs.pa.producteditor.application.viewCodeGen" name="ViewCodeGen"> </activity> <activityPatternBinding activityId="com.csc.fs.pa.producteditor.application.viewCodeGen" pattern="com.csc.fs.pa.bpd.vpms.codegen/com.csc.fs.pa.bpd.vpms.codegen.bpdCodeGenActionId"> </activityPatternBinding> Add New Product Definition Extension </extension--> </plugin> class="com.csc.fs.underwriting.product.editors.PDFViewer" extensions="pdf" icon="icons/pdficon_small.gif" id="com.csc.fs.pa.producteditor.application.editors.PDFViewer" name="PDF Viewer"> </editor> </extension> <extension point="org.eclipse.ui.views"> <category id="com.csc.fs.pa.application.viewCategory" name="%category"> </category> </extension> <extension point="org.eclipse.ui.newWizards"> <category id="com.csc.fs.pa.application.newWizardCategory" name="%category"> </category> <category 
id="com.csc.fs.pa.application.newWizardInitialize" name="%initialize" parentCategory="com.csc.fs.pa.application.newWizardCategory"> </category> </extension> <extension point="com.csc.fs.pa.common.usability.addNewCategory"> <addNewCategoryId id="com.csc.fs.pa.application.newWizardCategory"> </addNewCategoryId> </extension> <!--extension point="org.eclipse.ui.activities"> <activity description="View Code Generation Option" id="com.csc.fs.pa.producteditor.application.viewCodeGen" name="ViewCodeGen"> </activity> <activityPatternBinding activityId="com.csc.fs.pa.producteditor.application.viewCodeGen" pattern="com.csc.fs.pa.bpd.vpms.codegen/com.csc.fs.pa.bpd.vpms.codegen.bpdCodeGenActionId"> </activityPatternBinding> Add New Product Definition Extension </extension--> </plugin> Inside each class(the qualified classes in above xml) i did only hide and show the view according to perspective and that is working very fine.. Please provide any method that Eclipse provide so that I can override it in each classed so that it can work accordingly.

    Read the article

  • Learning frameworks without learning languages

    - by Tom Morris
I've been reading up on GUI frameworks including WPF, GTK and Cocoa (UIKit). I don't really do anything related to Windows (I'm a Mac and Linux guy) or .NET, but I'd like to be able to throw together GUIs for various operating systems. We are in the enviable position now of having high-level scripting languages that work with all of the major GUI toolkits. If you are doing Linux GUI programming, you could use GTK in C, but why not just use PyGTK (or PyQt)? Similarly, for Java, one can use JRuby. For Mac, there's MacRuby. And on .NET, there's IronRuby. This is all fine and good, and if you are building a serious project, there are tradeoffs that you might encounter when deciding whether to, say, build a WPF app in C# or in IronRuby, or whether you are going to use PyGTK or not. The subjective question I have is: what about learning those frameworks? Are there strong reasons why one should or should not learn something like WPF or Cocoa in a language one is familiar with, rather than having to learn a new language as well? I'm not saying you should never learn the language. If you are building Windows applications and you don't know C#, that might be a bit of a problem. But do you think it is okay to learn the framework first? This is both a general question and a specific question. I've used some Cocoa classes from Ruby and Python using things like PyObjC and there always seems to be an impedance mismatch because of the way Objective-C libraries get built. Experiences and strong opinions welcome!

    Read the article

  • HPET for x86 BSP (how to build it for WCE8)

    - by Werner Willemsens
Originally posted on: http://geekswithblogs.net/WernerWillemsens/archive/2014/08/02/157895.aspx "I needed a timer." That is how we started our series about APIC and ACPI a few blogs ago. Well, here it is. HPET (High Precision Event Timer) was introduced by Intel in the early 2000s to: Replace old-style Intel 8253 (1981!) and 8254 timers Support more accurate timers that could be used for multimedia purposes. Hence Microsoft and Intel sometimes refer to HPET as multimedia timers. An HPET chip consists of a 64-bit up-counter (main counter) counting at a frequency of at least 10 MHz, and a set of (at least three, up to 256) comparators. These comparators are 32- or 64-bit wide. The HPET is discoverable via ACPI. The HPET circuit in recent Intel platforms is integrated into the SouthBridge chip (e.g. 82801). All HPET timers should support one-shot interrupt programming, while optionally they can support periodic interrupts. In most Intel SouthBridges I worked with, there are three HPET timers. TIMER0 supports both one-shot and periodic mode, while TIMER1 and TIMER2 are one-shot only. Each HPET timer can generate interrupts, both in old-style PIC mode and in APIC mode. However in PIC mode, interrupts cannot freely be chosen. Typically IRQ11 is available and cannot be shared with any other interrupt, which makes the HPET in PIC mode virtually unusable! In APIC mode, however, more IRQs are available and can be shared with other interrupt-generating devices. (Check the datasheet of your SouthBridge.) Because of this higher level of freedom, I created the APIC BSP (see previous posts). The HPET driver code that I present here uses this APIC mode. Hpet.reg
    [HKEY_LOCAL_MACHINE\Drivers\BuiltIn\Hpet]
    "Dll"="Hpet.dll"
    "Prefix"="HPT"
    "Order"=dword:10
    "IsrDll"="giisr.dll"
    "IsrHandler"="ISRHandler"
    "Priority256"=dword:50
    Because HPET does not reside on the PCI bus, but can be found through ACPI as a memory-mapped device, you don't need to specify the "Class", "SubClass", "ProgIF" and other PCI-related registry keys that you typically find for PCI devices. If a driver needs to run its internal thread(s) at a certain priority level, by convention in Windows CE you add the "Priority256" registry key. Through this key you can easily play with the driver's thread priority for better response and timer accuracy. See later. Hpet.cpp (Hpet.dll) This cpp file contains the complete HPET driver code. The file is part of a folder that you typically integrate in your BSP (\src\drivers\Hpet). It is written as sample (example) code; you will most likely want to change it to your specific needs. There are two sets of #define's that I use to control how the driver works. _TRIGGER_EVENT or _TRIGGER_SEMAPHORE: _TRIGGER_EVENT will let your driver trigger a Windows CE Event when the timer expires, _TRIGGER_SEMAPHORE will trigger a Windows CE counting Semaphore. The latter guarantees that no events get lost in case your application cannot always process the triggers fast enough. _TIMER0 or _TIMER2: both timers will trigger an event or semaphore periodically. _TIMER0 will use a periodic HPET timer interrupt, while _TIMER2 will reprogram a one-shot HPET timer after each interrupt. The one-shot approach is interesting if the frequency you wish to generate is not an even multiple of the HPET main counter frequency. The sample code uses an algorithm to generate a more correct frequency over a longer period (by reducing rounding errors). _TIMER1 is not used in the sample source code.
HPT_Init() will locate the HPET I/O memory space, set up the HPET counter (_TIMER0 or _TIMER2) and install the Interrupt Service Thread (IST). Upon timer expiration, the IST will run and in turn will generate a Windows CE Event or Semaphore. In the case of _TIMER2 a new one-shot comparator value is calculated and set for the timer. The IRQs of the HPET timers are programmed to IRQ22, but typically you can choose from 20-23. The TIMERn_INT_ROUT_CAP bits in the TIMn_CONF register will tell you what IRQs you can choose from. HPT_IOControl() can be used to set a new HPET counter frequency (actually you configure the counter timeout value in microseconds), start and stop the timer, and request the current HPET counter value. The latter is interesting because the Windows CE QueryPerformanceCounter() and QueryPerformanceFrequency() APIs implement the same functionality, albeit based on other counter implementations. HpetDrvIst() contains the IST code:
    DWORD WINAPI HpetDrvIst(LPVOID lpArg)
    {
        psHpetDeviceContext pHwContext = (psHpetDeviceContext)lpArg;
        DWORD mainCount = READDWORD(pHwContext->g_hpet_va, GenCapIDReg + 4); // Main Counter Tick period (femto sec 10E-15)
        DWORD i = 0;
        while (1)
        {
            WaitForSingleObject(pHwContext->g_isrEvent, INFINITE);
    #if defined(_TRIGGER_SEMAPHORE)
            LONG p = 0;
            BOOL b = ReleaseSemaphore(pHwContext->g_triggerEvent, 1, &p);
    #elif defined(_TRIGGER_EVENT)
            BOOL b = SetEvent(pHwContext->g_triggerEvent);
    #else
    #pragma error("Unknown TRIGGER")
    #endif
    #if defined(_TIMER0)
            DWORD currentCount = READDWORD(pHwContext->g_hpet_va, MainCounterReg);
            DWORD comparator = READDWORD(pHwContext->g_hpet_va, Tim0_ComparatorReg + 0);
            SETBIT(pHwContext->g_hpet_va, GenIntStaReg, 0); // clear interrupt on HPET level
            InterruptDone(pHwContext->g_sysIntr);           // clear interrupt on OS level
            _LOGMSG(ZONE_INTERRUPT, (L"%s: HpetDrvIst 0 %06d %08X %08X", pHwContext->g_id, i++, currentCount, comparator));
    #elif defined(_TIMER2)
            DWORD currentCount = READDWORD(pHwContext->g_hpet_va, MainCounterReg);
            DWORD previousComparator = READDWORD(pHwContext->g_hpet_va, Tim2_ComparatorReg + 0);
            pHwContext->g_counter2.QuadPart += pHwContext->g_comparator.QuadPart; // increment virtual counter (higher accuracy)
            DWORD comparator = (DWORD)(pHwContext->g_counter2.QuadPart >> 8);     // "round" to real value
            WRITEDWORD(pHwContext->g_hpet_va, Tim2_ComparatorReg + 0, comparator);
            SETBIT(pHwContext->g_hpet_va, GenIntStaReg, 2); // clear interrupt on HPET level
            InterruptDone(pHwContext->g_sysIntr);           // clear interrupt on OS level
            _LOGMSG(ZONE_INTERRUPT, (L"%s: HpetDrvIst 2 %06d %08X %08X (%08X)", pHwContext->g_id, i++, currentCount, comparator, comparator - previousComparator));
    #else
    #pragma error("Unknown TIMER")
    #endif
        }
        return 1;
    }
    The following figure shows how the HPET hardware interrupt, via ISR -> IST, is translated into a Windows CE Event or Semaphore by the HPET driver. The Event or Semaphore can be used to trigger a Windows CE application. HpetTest.cpp (HpetTest.exe) This cpp file contains sample source showing how to use the HPET driver from an application. The file is part of a separate (smart device) VS2013 solution. It contains code to measure the generated Event/Semaphore times by means of the GetSystemTime(), QueryPerformanceCounter() and QueryPerformanceFrequency() APIs. HPET evaluation If you scan the internet about HPET, you'll find many remarks about buggy HPET implementations and bad performance. Unfortunately that is true. I tested the HPET driver on an Intel ICH7M SBC (release date 2008).
    When an HPET timer expires on the ICH7M, an interrupt is indeed generated, but right after you clear the interrupt, a few more unwanted interrupts (too soon!) occur as well. I tested and debugged it for a loooong time, but I couldn't get it to work. I concluded that the ICH7M's HPET is buggy Intel hardware.

    I tested the HPET driver successfully on a more recent NM10 SBC (release date 2013). With the NM10 chipset, however, I am not fully convinced about the timer's frequency accuracy. In the long run - on average - all is fine, but occasionally I experienced delays of up to 20 microseconds (which were immediately compensated on the next interrupt). Of course, this was all measured by software, but I still experienced the occasional delay when both the HPET driver IST thread and the application thread ran at CeSetThreadPriority(1). If it is not the hardware, only the kernel can cause this delay. Yet Windows CE is an RTOS, and I have never experienced such long delays with previous versions of Windows CE. I tested and developed this on WCE8, with which I am not heavily experienced yet; internet forum threads, however, mention inaccurate HPET timer implementations as well. At this moment I haven't figured out what is going on here.

    Useful references:

    - http://www.intel.com/content/dam/www/public/us/en/documents/technical-specifications/software-developers-hpet-spec-1-0a.pdf
    - http://en.wikipedia.org/wiki/High_Precision_Event_Timer
    - http://wiki.osdev.org/HPET
    - Windows CE BSP source file package for HPET in MyBsp

    Note that this source code is "As Is". It is still under development and I cannot (and never will) guarantee the correctness of the code. Use it as a guide for your own HPET integration.
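    As a closing illustration, here is a hedged sketch of what the application side (in the spirit of HpetTest.exe) can look like. The "HPT1:" device name follows from the "Prefix" registry key above, but the IOCTL codes and the name of the trigger event are illustrative assumptions; the real definitions ship with the BSP source package mentioned above.

        // Hypothetical application-side use of the HPET driver.
        // The IOCTL codes and the event name are assumptions, not the BSP definitions.
        #include <windows.h>
        #include <winioctl.h>

        #define IOCTL_HPT_SET_TIMEOUT CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)
        #define IOCTL_HPT_START       CTL_CODE(FILE_DEVICE_UNKNOWN, 0x801, METHOD_BUFFERED, FILE_ANY_ACCESS)

        void RunTimerTest()
        {
            // "HPT" comes from the "Prefix" registry key; the index depends on the device instance
            HANDLE hDrv = CreateFile(L"HPT1:", GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_EXISTING, 0, NULL);
            if (hDrv == INVALID_HANDLE_VALUE) return;

            DWORD timeoutUs = 1000; // requested period in microseconds
            DeviceIoControl(hDrv, IOCTL_HPT_SET_TIMEOUT, &timeoutUs, sizeof(timeoutUs), NULL, 0, NULL, NULL);
            DeviceIoControl(hDrv, IOCTL_HPT_START, NULL, 0, NULL, 0, NULL, NULL);

            // Opening an existing named event via CreateEvent works on all CE versions;
            // the event name is an assumption.
            HANDLE hTrigger = CreateEvent(NULL, FALSE, FALSE, L"HPET_TRIGGER");
            for (int n = 0; n < 1000; n++)
            {
                WaitForSingleObject(hTrigger, INFINITE);
                // timestamp here with QueryPerformanceCounter() to measure period and jitter
            }
            CloseHandle(hTrigger);
            CloseHandle(hDrv);
        }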

    Read the article

  • The Windows Azure Software Development Kit (SDK) and the Windows Azure Training Kit (WATK)

    - by BuckWoody
    Windows Azure is a platform that allows you to write software, run software, or use software that we've already written. We provide lots of resources to help you do that - many can be found right here in this blog series. There are two primary resources you can use, and it's important to understand what they are and what they do.

    The Windows Azure Software Development Kit (SDK)

    Actually, this isn't one resource. We have SDKs for multiple development environments, such as Visual Studio and Eclipse, along with SDKs for iOS, Android and other environments. Windows Azure is a "back end", so almost any technology or front-end system can use it to solve a problem. The SDKs are primarily for development. In the case of Visual Studio, you get a runtime environment for Windows Azure which allows you to develop, test and even run code entirely locally - you do not have to be connected to Windows Azure at all until you're ready to deploy. You also get a few samples and code blocks, along with all of the libraries you need to code against Windows Azure in .NET, PHP, Ruby, Java and more. The SDK is updated frequently, so check this location to find the latest for your environment and language - just click the bar that corresponds to what you want: http://www.windowsazure.com/en-us/develop/downloads/

    The Windows Azure Training Kit (WATK)

    Whether you're writing code, using Windows Azure Virtual Machines (VMs) or working with Hadoop, you can use the WATK to get examples, code, PowerShell scripts, PowerPoint decks, training videos and much more. This should be your second download after the SDK. It is all of the training you need to get started, and even beyond. The WATK is updated frequently - you can find the latest one here: http://www.windowsazure.com/en-us/develop/net/other-resources/training-kit/

    There are many other resources - again, check the http://windowsazure.com site, the community newsletter (which introduces the latest features), and my blog for more.

    Read the article

  • IASA South East Florida Chapter February Meeting Report

    - by Rainer Habermann
    IASA South East Florida Chapter - February Meeting

    The topic for our February chapter meeting was Legal Issues in IT. Ms. Kennedy, an Intellectual Property Attorney with an active litigation, trademark and copyright practice, presented: "How Google, Wal-Mart & Apple Make their Millions - The Secret Ingredient: Intellectual Property". The topic generated great interest, and the meeting room at Microsoft Ft. Lauderdale filled up to the last seat. Most architects, engineers, and MBAs are not aware of intellectual property, basic patent and trademark matters, or legal issues related to the web. After clarifying the basic definitions, Ms. Kennedy explained in detail how intellectual property issues can make or break a company. Members had the opportunity at the end of the presentation to ask questions and discuss legal problems, and several members shared their experiences related to intellectual property and other IT issues. If you want to protect your ideas and intellectual property, you have to be aware of the implications and need to take the right steps to protect them. All chapter members agreed that it was an outstanding and lively presentation with high-quality content that made participants aware of legal IT issues. In the name of all chapter members, thank you Ms. Kennedy for taking the time for this amazing presentation, and thanks to Quent Herschelman for hosting the meeting.

    Rainer Habermann
    President, IASA South East Florida Chapter

    Read the article

  • Surface RT: To Be Or Not To Be (Part 1)

    - by smehaffie
    So the Surface RT has been out for 9 months and Microsoft just declared a $900 million write-down. How did this happen, and what does it mean for Microsoft's efforts to break into the tablet market? I have been thinking a lot about most of the information below since the Surface product line was released. If you are looking for a "Microsoft Is Dead" story, then don't read any further. But if you want an honest look at what I think led Microsoft to this point and what I think can be done to make Surface RT devices better, then please continue reading.

    What Led Microsoft To The $900 Million Write-Down

    Surface Unveiling: Microsoft totally missed the boat when they unveiled the Surface product line on June 18th, 2012. Microsoft should have been ready to post the specifications of both devices that night, and should have had a site up and running right after the event so people could pre-order the devices. This would have given them a good idea of the interest in each device, and they could have used that data to better estimate the number of units to have available for the launch and beyond. They also lost out on the excitement generated by the Surface RT and Surface Pro announcement. They could have thrown in a free touch keyboard for anyone who pre-ordered. The advertising should have started right after the announcement and gotten bigger as launch day approached: push for as many pre-orders as possible and build excitement for the launch.

    Actual Launch (Surface RT): By this time all excitement from the initial announcement was gone, except among the Microsoft faithful. Microsoft should have been ready to sell the Surface in as many markets as possible at launch. The limited market release was a real letdown for a lot of people. A limited release right after the initial announcement is understandable, but not at the official launch of the product. Microsoft overpriced the device and now they are lowering the price to what it should have been to start with. The $349 price is within the range I suggested before pricing was announced (Surface Tablets: The Price Must Be Right). Limited ordering options online were also a killer. Users should have been able to buy the base unit of each device and then add on whatever keyboard they wanted (this applies more to the Surface Pro). There should also have been a place where users could order any additional add-ons they wanted (covers, extra power supplies, etc.). Marketing was better, and the dancing "Click In" commercial was cool, but the ads comparing the iPad with Siri should have been on the air from day one of the announcement (or at least the launch). Consumers want to know why your tablet is better, not just that it has a clickable keyboard and a built-in kickstand. They could also have compared it to some of the other mid-range tablets if they had not overpriced it to begin with.

    Stock Applications (Mail, People, Calendar, Music, Video, Reader and IE): This is where Microsoft really blew it. They had all the time in the world to make these applications best of breed, and instead we got applications that seemed thrown together. Some updates have made these applications better, but they are all still lacking features that should have been there from day one. This did not help to enhance a new user's experience any.
    I will admit that the data-driven applications were first-class citizens, which makes it even more perplexing that Microsoft could knock it out of the park with Weather, Travel, Finance, Bing, etc. and fail so miserably on the core applications users would use the most on a tablet.

    Desktop on Tablet: The desktop is just so out of place on the tablet. I understand it was needed for Office, but I think it would have been better not to have the desktop in Windows RT at all and instead open the Office applications in full-screen mode, in a desktop shell (same goes for IE11). That way the user wouldn't realize they are leaving Metro and going to the desktop. The other option would have been to simply not include Office on Windows RT devices. Instead, they could have made awesome Windows Store apps for Word, Excel, OneNote and PowerPoint. In addition, they could have given the stock Mail, People, and Calendar applications all the functions that Outlook gives desktop users. Having some of the settings in desktop mode and others under "Change PC Settings" made Windows RT seem unfinished and rushed to market.

    What Can Be Done To Make Windows RT Based Tablets Better (at least in my opinion)

    - Either eliminate the desktop altogether from Windows RT, or at least improve the user experience by hiding the fact that the user is running Office/IE in the desktop. Personally, I'd like them to get rid of it entirely and just make awesome Windows Store app versions of Word, Excel, PowerPoint and OneNote. This might also make the OS smaller and give the user more available disk space. I doubt there will ever be Windows Store app versions of Office, but I still think it is a good idea.
    - Make it so users can easily direct their documents, pictures, videos and music to their extra storage and can access these files from the standard libraries. A user should not have to create a VM on their microSD card or create symbolic links to get this to work properly; most consumers would not be able to do this. Users get frustrated when they run out of room on their main storage because nothing is automatically saved to their microSD card when saved to the libraries. This is a major bug that needs to be fixed, otherwise Microsoft's selling point of having a microSD slot is worthless.
    - Allow users to uninstall and re-install any of the Office products that come with the Surface, so people can free up storage space by uninstalling the Office applications they do not need. Everyone's needs are different, so make the options flexible; don't take up storage space for applications the user will not use.
    - Make the core applications the cream of the crop of Windows Store applications. They should set the bar for all other Store applications.
    - Improve performance as much as possible; if it seems sluggish on a tablet, consumers will not buy it.
    - Price the next line of Surface products very aggressively, to undercut not only the iPad but also low-end Android tablets (Nook, Kindle Fire, Nexus, etc.).
    - Give developers incentives to write quality applications for the devices. Don't reward developers for cranking out cookie-cutter, low-quality applications. I'd even suggest Microsoft consider implementing new Store certification guidelines to stop these types of applications from being published.
    - Allow users to easily move the recovery disk partition between their microSD card and main storage.
    My Predictions for the Surface RT and Windows RT

    I honestly think that even with all the missteps Microsoft has made since the announcement of the Surface product line, they are on the right path. I was excited about the Surface tablets when they were announced, and I still am. Truth be told, Windows 8 on a tablet (aka Windows RT) is better than both iOS and Android. My nephew, who is an Apple fanboy, told me after he saw and used Windows 8 (he got the beta running on his iPad) that Windows 8 kicked Apple's butt as a tablet OS. So there is hope for all Windows RT based tablets. I agree with my nephew, and that is why whenever anyone asks me about my Surface, I love showing it off and recommend it.

    The 6 keys to gaining market share in the tablet market are:

    1. Aggressive pricing by both Microsoft and their OEMs.
    2. Good quality devices put out by Microsoft and their OEMs (there are some out there, but not enough).
    3. Marketing, marketing, marketing from both Microsoft and their OEMs (we need more ads showing why Windows based tablets are better than iPads and Android tablets).
    4. Getting Windows tablets into retail stores everywhere, and giving salespeople incentives to sell them. Consumers like to try electronics out before they buy them, and most will listen to what the salesperson suggests. Microsoft needs salespeople in retail stores directing people to buy Windows based tablets over iPads and Android tablets. I think the Microsoft Stores within Best Buy are a good start, but they also need prominent displays in Walmart, Target, etc.
    5. Releasing a smaller form-factor Surface. Hopefully the 8"-10" next-generation Surface is not a rumor.
    6. Making "Surface" the brand name for all Microsoft tablets and hybrid devices that they come out with. They cannot change the name with each new release. Make Surface synonymous with quality, the same way iPad is for Apple.

    Well, that is my 2 cents on the subject. Let me know your thoughts by leaving a comment below. Soon to follow will be my thoughts on the Surface Pro, so keep an eye out for it.

    Read the article

  • What are the options for simple Ajax calls for a Java webapp?

    - by Cedric Martin
    I've got a very simple need and I don't know what the available options are. If I simplify, users see a webpage like this, served by a Java webapp server:

        [-] red
        [x] green
        [-] blue
        [-] yellow

        The selected color is green

    I then want the user to be able to select the yellow color and have the relevant part of the page change to:

        [-] red
        [-] green
        [-] blue
        [x] yellow

        The selected color is yellow

    Basically I want something a bit more user friendly than plain HTTP GETs all the time. There will be a lot of options the user can select from, and these shall affect an (HTML formatted) text displayed on the page. I want the user to see the change as soon as possible, without the page fully reloading and without being redirected to another page. A client/server round-trip is required (the information to display depending on the selected options isn't available on the client side, so I cannot do it all in JavaScript in the browser). I'd like to use Ajax requests but I don't know which way to go:

    - jQuery
    - GWT
    - something else

    What are my options and what would be the pros and cons of the various approaches?

    P.S.: I'm very familiar with Java (SCP since the last century and basically being a Java programmer for the last 12 years or so) but not familiar at all with JavaScript (though I did hack a few Ajaxy calls years ago, way before great libraries existed).

    Read the article

  • Is 1GB RAM with integrated graphics sufficient for Unity 3D on 12.04?

    - by Anwar Shah
    I have been using Ubuntu since Hardy Heron (8.04). I used Natty and Oneiric with Unity. But since I recently (more than 1 month ago) upgraded my Ubuntu to Precise (12.04), the performance of my laptop has not been satisfactory. It is too unresponsive compared to older releases. For example, Unity in 12.04 is very unresponsive: sometimes it takes 2 seconds for the Dash to show up (which was not the case with Natty, though people always say that Natty's version of Unity was the buggiest). I am assuming that maybe my 1 GB of RAM has become too little to run Unity in Precise. But I also think that, since Unity is improved in Precise, that may not be the case. So I am not sure. Do you have any ideas? Will upgrading RAM fix it? How much do I need if an upgrade is required?

    Laptop model: Lenovo 3000 Y410
    Graphics: Intel GMA X3100 on Intel 965GM chipset
    RAM/Memory: 1 GB DDR2 (1 slot empty)
    Swap space: 1.1 GB
    Resolution: 1280x800 widescreen
    Shared RAM for graphics: 256 MB, as the output below suggests

        $ dmesg | grep AGP
        [    0.825548] agpgart-intel 0000:00:00.0: AGP aperture is 256M @ 0xd0000000

    Read the article

  • Google+ Platform Office Hours for March 21, 2012: JavaScript and the REST APIs

    It's a blast from the past. Here's the video from our office hours held on the 20th of March. In this session Jonathan and Wolff guided you through using the REST APIs with JavaScript.

    Get the source code: goo.gl
    Discuss this video on Google+: goo.gl

    1:05 - How to use JSONP to access the REST APIs in JavaScript
    2:30 - Setting up a new project in the API console
    7:06 - The client libraries, what are they?
    8:39 - Using the JavaScript client library to reimplement our example
    13:27 - About OAuth and private resources
    14:26 - Creating an OAuth client using the API console - The JavaScript client library discussion group: goo.gl
    24:14 - Using the JavaScript client library and REST APIs from within a hangout - hangoutbots.blogspot.com

    Q&A
    19 - Are you planning to change the +1 button back?
    20:14 - Will this video be posted to YouTube? (Spoiler: the answer is yes)
    20:43 - Do you read your issues list? - The Google+ platform issue tracker: goo.gl

    From: GoogleDevelopers. Length: 27:03.

    Read the article

  • How do I set up live audio streams to a DLNA compliant device?

    - by Takkat
    Is there a way to stream the live output of the soundcard of our 12.04.1 LTS amd64 desktop to a DLNA-compliant external device in our network? Selecting media content in shared directories using Rygel, miniDLNA, and uShare works fine - but so far we have completely failed to get a live audio stream to a client via DLNA. PulseAudio claims to have a DLNA/UPnP media server that, together with Rygel, is supposed to do just this, but we were unable to get it running. We followed the steps outlined in live.gnome.org, this answer here, and also in another similar guide. As soon as we select the local audio device, or our GST-Launch stream, in the DLNA client, Rygel displays the following message and the client states it has reached the end of the playlist:

        (rygel:7380): Rygel-WARNING **: rygel-http-request.vala:97: Invalid seek request

    This is how we configured GST-Launch in rygel.conf:

        [GstLaunch]
        enabled=true
        launch-items=mypulseaudiosink
        mypulseaudiosink-title=Audio on @HOSTNAME@
        mypulseaudiosink-mime=audio/x-wav
        mypulseaudiosink-launch=pulsesrc device=<device> ! wavpackenc

    For <device> we tried the default sink name, this name appended with .monitor, and in addition upnp-sink and upnp.monitor, which were created when we selected DLNA media server in paprefs. We also tried to encode using lamemp3enc, with no luck.

    These are our PulseAudio modules: http://paste.ubuntu.com/1202913/
    These are our sinks: http://paste.ubuntu.com/1202916/

    Did we miss any additional configuration needed to get this running? Are there any alternatives for sending the audio of our soundcard as a live stream to a DLNA client?

    Read the article

  • The Open Data Protocol

    - by Bobby Diaz
    Well, day 2 of the MIX10 conference did not disappoint. The keynote speakers introduced the preview release of IE9, which looks really cool and quick, and the Visual Studio 2010 RC, which is scheduled to RTM on April 12th and seemed to have a lot of improvements aimed at making developers more productive. Here are the current links to these two offerings:

    - Internet Explorer 9 - Platform Preview
    - Visual Studio 2010 and .NET 4 - Release Candidate

    While both of these were interesting, the demos that really blew me away today centered around the work being done on the Open Data Protocol, or OData for short! OData is a recommended standard being pushed by Microsoft that uses a REST-based interface to interact with various types of data in a uniform manner. Data producers then provide the data to consumers in either ATOM or JSON format, as requested by the client application.

    The OData SDK contains client and server libraries for many of the popular languages in use today, including .NET, Java, PHP, Objective-C and JavaScript, so you can consume or even produce your own OData services. More information can be found using the following links:

    - OData.org
    - How to navigate an OData compliant service
    - Query Functions (WCF Data Services)

    Netflix has made available one of the first live OData services by exposing their entire movie catalog. You can browse and query it using URLs similar to the following:

    - http://odata.netflix.com/
    - http://odata.netflix.com/Catalog/Genres('Horror')/CatalogTitles
    - http://odata.netflix.com/Catalog/CatalogTitles?$filter=startswith(Title/Regular,%20'Star%20Wars')&$orderby=Title/Regular

    So now I just need to find an excuse, er, reason to start using OData in a real project! Enjoy!
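    Since the protocol is just HTTP, any language with an HTTP client can consume a feed like the ones above. As a minimal sketch, here is a C++ program using libcurl that fetches one of the catalog queries from the post and prints the raw response; the Accept header asks the producer for JSON instead of the default ATOM feed. Note the Netflix endpoint is taken from the post and has since been retired, so treat the URL as illustrative.

        #include <curl/curl.h>
        #include <iostream>
        #include <string>

        // libcurl write callback: append each received chunk to a std::string
        static size_t collect(char* data, size_t size, size_t nmemb, void* out)
        {
            static_cast<std::string*>(out)->append(data, size * nmemb);
            return size * nmemb;
        }

        int main()
        {
            curl_global_init(CURL_GLOBAL_DEFAULT);
            CURL* curl = curl_easy_init();
            if (!curl) return 1;

            std::string body;
            // Ask for JSON; producers fall back to ATOM without this header
            struct curl_slist* headers = curl_slist_append(NULL, "Accept: application/json");
            // URL from the post; the Netflix OData service is now retired
            curl_easy_setopt(curl, CURLOPT_URL,
                "http://odata.netflix.com/Catalog/Genres('Horror')/CatalogTitles");
            curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
            curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

            if (curl_easy_perform(curl) == CURLE_OK)
                std::cout << body << std::endl;   // raw OData payload

            curl_slist_free_all(headers);
            curl_easy_cleanup(curl);
            curl_global_cleanup();
            return 0;
        }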

    Read the article

< Previous Page | 253 254 255 256 257 258 259 260 261 262 263 264  | Next Page >