Search Results

Search found 190 results on 8 pages for 'todd moses'.

Page 2 of 8

  • Tuxedo 11gR1 Client Server Affinity

    - by todd.little
    One of the major new features in Oracle Tuxedo 11gR1 is the ability to define an affinity between clients and servers. In previous releases of Tuxedo, the only way to ensure that multiple requests from a client went to the same server was to establish a conversation with tpconnect() and then use tpsend() and tprecv(). Although this works, it has some drawbacks. First, a single-threaded server is tied up for the entire duration of the conversation and cannot service other clients, an obvious scalability issue. The more significant drawback, I believe, is that the application programmer has to switch from the simple request/response model provided by tpcall() to the half-duplex tpsend() and tprecv() calls used with conversations. Switching between the two typically requires a fair amount of redesign and recoding.

    The Client Server Affinity feature in Tuxedo 11gR1 lets an application define, purely through configuration, the affinities that can exist between clients and servers. This is done in the *SERVICES section of the UBBCONFIG file. Using new parameters on the services defined there, customers control when an affinity session is created or deleted, the scope of the affinity, and whether requests can be routed outside that scope. The AFFINITYSCOPE parameter can be MACHINE, GROUP, or SERVER, meaning that while the affinity session is in place, all requests from the client will be routed to the same machine, group, or server. The creation and deletion of affinity is controlled by the SESSIONROLE parameter: a service can be defined as BEGIN, END, or NONE, where BEGIN starts an affinity session, END deletes the affinity session, and NONE leaves it unchanged.

    Finally, customers can define how strictly the affinity scope is adhered to using the AFFINITYSTRICT parameter. If set to MANDATORY, all requests made during an affinity session will be routed to a server in the affinity scope. Thus if the affinity scope is SERVER, all subsequent tpcall() requests will be sent to the same server the affinity session was established with; if that server doesn't offer the requested service, the call will fail with TPNOENT even though other servers offer it. Setting AFFINITYSTRICT to PRECEDENT tells Tuxedo to try to route the request to a server in the affinity scope, but to fall back to servers outside the scope if that isn't possible.

    All of this raises the question: why have this feature? There are many uses for this capability, but the most common is when state is maintained in a server, a group of servers, or a machine, and subsequent requests from a client must be routed to where that state lives. This might be something as simple as a database cursor maintained by a server on behalf of a client. Alternatively, the server might hold a connection to an external system, and subsequent requests need to go back to the server with that connection. A more sophisticated case is a group of servers maintaining some sort of cache in shared memory, where subsequent requests need to be routed to where the cache is maintained. Although this last case might be handled by data-dependent routing, client/server affinity allows the cache to be partitioned dynamically instead of statically.
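
    To make the configuration concrete, here is a minimal *SERVICES sketch based on the parameters described above. The service names are hypothetical, and the exact syntax should be checked against the Tuxedo 11gR1 reference:

        *SERVICES
        # An affinity session begins at LOGIN, pins the client to the
        # group that served it, and ends at LOGOUT.
        LOGIN    SESSIONROLE=BEGIN AFFINITYSCOPE=GROUP AFFINITYSTRICT=MANDATORY
        LOGOUT   SESSIONROLE=END
        INQUIRY  SESSIONROLE=NONE

    With something like this in place, every tpcall() a client makes between LOGIN and LOGOUT is routed to the same server group that handled LOGIN.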

  • Tuxedo 11gR1 Released

    - by todd.little
    I've been a little quiet the last several months as the Tuxedo team has been very busy. Today Oracle announced the 11gR1 release of the Tuxedo product family. This release includes updates to Tuxedo, TSAM, and SALT, as well as three new products that Oracle is announcing today: the Oracle Tuxedo Application Runtime for CICS and Batch, the Oracle Application Rehosting Workbench, and the Tuxedo JCA Adapter.

    By providing a CICS-equivalent runtime and a rehosting workbench that automates the rehosting of COBOL CICS code, JCL procedures, data definitions, and data, Oracle has significantly lowered the effort and risk of rehosting mainframe CICS and batch applications onto the Tuxedo runtime on open systems. By moving off proprietary legacy mainframes, customers have experienced better performance and achieved a 50-80% reduction in their total cost of ownership. The rehosting tools leave the COBOL business logic unchanged and automate the replacement of CICS statements with calls to Tuxedo. The rehosted code can then run on open systems 'as-is'. Users can keep the same TN3270 interfaces they are used to, eliminating the need for retraining, and batch procedures can be run and managed in a JES2-like environment.

    For the first time, customers have the tools and an enterprise-class runtime environment to move their key legacy assets off the mainframe and onto distributed open systems, whether the application uses 250 MIPS, 25,000 MIPS, or more. More on these exciting new options in additional blog entries.

  • TSAM 11gR1

    - by todd.little
    The Tuxedo System and Application Monitor (TSAM) 11gR1 release provides powerful new application monitoring capabilities, as well as significant improvements in ease of use. The first thing users will notice is the completely redesigned user interface in the TSAM console. Based on Oracle ADF, the console is much easier to navigate, provides a Web 2.0-style interface with dynamically updating panels, and has a look and feel familiar to those who have used Oracle Enterprise Manager. Monitoring data can be viewed in both tabular and graphical form and exported to Excel for further analysis.

    A number of new metrics are collected and displayed in this release. Call path monitoring now displays CPU time, message size, total transport time, and client address, giving even more end-to-end information about a specific Tuxedo request. The call path display has also been completely revamped to make it much easier to see the branches of the call path. The call pattern display now provides statistics on successful vs. failed calls, system and application failures, and end-to-end average elapsed time. Service monitoring now displays minimum and maximum message size, CPU usage, and client address. System server monitoring now includes the SALT gateway servers, providing detailed performance metrics about those servers.

    Perhaps the most significant new feature is the consolidation of alert definitions and policy management. In previous versions of TSAM, some alerts were defined and checked on the monitored systems while others were defined and checked in the console, and policy management could be performed both on the monitored node (via environment variable or command) and from the console. Now all alert and policy definitions are made only in the console. For alerts this means that regardless of where an alert is evaluated, it is defined in one and only one place. The plug-in alert mechanism of previous releases can thus be managed from the TSAM console, making SLA alert definition much easier and cleaner.

    Finally, TSAM supports monitoring rehosted mainframe applications. The newly announced Oracle Tuxedo Application Runtime for CICS and Batch can be monitored in the TSAM console using traditional mainframe views of the application, such as regions. Look for a future blog entry with more details on this, as well as some entries providing a glimpse of the console. TSAM gives users a single point for monitoring the performance of all of their Tuxedo applications.

  • Webcast: The ART of Migrating and Modernizing IBM Mainframe Applications

    - by todd.little
    Tuxedo provides an excellent platform for migrating mainframe applications to distributed systems. As the only distributed transaction processing monitor that offers quality of service comparable to or better than that of mainframe systems, Tuxedo allows customers to migrate their existing mainframe-based applications to a platform with a much lower total cost of ownership. Please join us on Thursday, April 29 at 10:00 am Pacific Time for this exciting webcast covering the new Oracle Tuxedo Application Runtime for CICS and Batch 11g. Find out how easy it is to migrate your CICS and mainframe batch applications to Tuxedo.

  • Effective versus efficient code

    - by Todd Williamson
    TL;DR: Quick and dirty code, or "correct" (insert your definition of this term) code? There is often a tension between "efficient" and "effective" in software development. "Efficient" often means code that is "correct" from the point of view of adhering to standards and using widely accepted patterns and approaches for structure, regardless of project size, budget, etc. "Effective" is not about being "right", but about getting things done. This often results in code that falls outside the bounds of commonly accepted standards and usage. Usually the people paying for the development effort have dictated ahead of time which of the two they value more. An organization that lives in a technical space will tend toward the efficient end; others will tend toward the effective. Developers often refuse to compromise their favored approach for the other. In my own experience, people with formal education in software development tend toward the Efficient camp, while those who picked up software development more or less as a tool to get things done tend toward the Effective camp. These camps don't get along very well, and managing a team of developers who are not all in one camp is challenging. In your own experience, which camp do you land in, and do you find yourself having to justify your approach to others? To management? To other developers?

  • Oracle Key Vault Sneak Peek at NYOUG

    - by Troy Kitch
    The New York Oracle Users Group will get a sneak peek of Oracle Key Vault on Tuesday, June 3, in a presentation by Todd Bottger, Senior Principal Product Manager at Oracle. If you recall, Oracle Key Vault made its first appearance at last year's Oracle OpenWorld in San Francisco in the session "Introducing Oracle Key Vault: Enterprise Database Encryption Key Management." You can catch Todd's talk from 9:30 to 10:30 am.

    Session abstract: With many global regulations calling for data encryption, centralized and secure key management has become a need for most organizations. This session introduces Oracle Key Vault for centrally managing encryption keys, wallets, and passwords for databases and other enterprise servers. Oracle Key Vault enables large-scale deployments of Oracle Advanced Security's Transparent Data Encryption feature and secure sharing of keys between Oracle Real Application Clusters (Oracle RAC), Oracle Active Data Guard, and Oracle GoldenGate deployments. With support for industry standards such as OASIS KMIP and PKCS #11, Oracle Key Vault can centrally manage keys and passwords for other endpoints in your organization and provide greater reliability, availability, and security.

  • Two Weeks As A Software Estimation Rule of Thumb?

    - by Todd Williamson
    I saw a blog posting that spoke to me: http://james-iry.blogspot.com/2010/10/how-to-estimate-software.html Oddly, this is the kind of estimate I tend to give on smaller projects: just about everything is "two weeks," as that is comfortably far enough out. I once had an instructor walk us through how to create a more detailed estimate, where we already had the requirements up front, and even after all the careful tabulation the final instruction was, "Now that you have all this documentation, go ahead and double it." Agile practitioners also seem to like two weeks as a sprint length. Is there something magical about two weeks? Is it a hrair number for our psyches, or some other kind of crutch? Do you have an immediate default fall-back schedule strategy when you are pressed for an initial delivery date?

  • "The daemon is being inhibited" error message when mounting volumes on a partitioned external HD [closed]

    - by Todd
    I'm having a great deal of difficulty with an external hard drive. I'm currently running a dual-boot system (XP Service Pack 3 and Ubuntu 11.04 Natty Narwhal) on a Dell Inspiron B120, and I'm trying to set up a new 80 GB Hitachi external HD. Using GParted, I formatted the drive and set up the partitions. The partitioning scheme is as follows: 10GB NTFS primary, 2GB linux-swap primary, 50GB FAT32 primary, 12GB unallocated. After applying those changes, I went into Disk Utility and the HD appears along with the correct partitions. When I try to mount the volumes for partitions 1 and 3, I get a pop-up stating:

        Error Mounting Volume
        An error occurred while performing an operation on "Home" (Partition 3 of HTS548080m9AT00): The daemon is being inhibited.

    When I try to check the filesystem, I get a pop-up stating:

        Error Checking Filesystem on Volume
        An error occurred while performing an operation on "Home" (Partition 3 of HTS548080m9AT00): The daemon is being inhibited.

    Throughout the time that I'm attempting to troubleshoot the problem, the external drive light is on and blinking. With my frustration hitting a boiling point, I tried to shut down the drive and remove it so that I could plug in a different external HD that works PERFECTLY. However, when I try to shut down and safely remove the drive, I get a pop-up stating:

        Error Detaching Drive
        An error occurred while performing an operation on "80GB Hard Disk" (HTS548080m9AT00): The daemon is being inhibited.

    Can anyone tell me what I'm doing wrong? I'm a newbie and not that skilled with terminal commands, so please dumb it down for me if you request specific command output.

  • State of Texas delivers Private Cloud Services powered by Oracle Technology

    - by Anand Akela
    The State of Texas has moved to a private cloud infrastructure and is delivering Infrastructure as a Service, Database as a Service, and other Platform as a Service offerings to its 28 state agencies. Todd Kimbriel, Director of the eGovernment Division at the State of Texas, attended Oracle OpenWorld and talked with Oracle's John Foley about their private cloud services offering. Later, Todd participated in the keynote panel of the Database as a Service Online Forum along with IDC analyst Carl Olofson, Oracle SVP Juan Loaiza, and a couple of other Oracle customers. He discussed the IT challenges of government organizations like the State of Texas and the benefits of transitioning to a private cloud, including database as a service.

  • Web Development Internship Interview Help

    - by Todd
    Tomorrow morning I'm interviewing for a web development internship, and I'm seeking general advice on the questions interviewers might ask, some questions I should ask them during the interview, and any general tips/suggestions that might help. They're looking for someone with knowledge mainly in HTML/CSS, Joomla, MySQL, and PHP, all of which I have except Joomla (which I'm installing and researching as much as possible at the moment), and I was able to give them a link to a site I'd been paid to build for a small business, which they mentioned they were impressed with. I'd like to prepare for the interview as much as I possibly can, but I'm wondering how much time I should spend rehashing the languages they're looking for versus researching their company and figuring out how I'd respond to general questions. I feel that because I showed them a completed project, they'll know I can grasp the technical side of what they're going for... but this is my first internship interview in this field, so I'm not exactly sure what to expect. Thanks!

  • How are design-by-contract and property-based testing (QuickCheck) related?

    - by Todd Owen
    Is their only similarity the fact that they are not xUnit (or, more precisely, not based on enumerating specific test cases), or is it deeper than that? Property-based testing (using QuickCheck, ScalaCheck, etc.) seems well-suited to a functional programming style where side effects are avoided. On the other hand, Design by Contract (as implemented in Eiffel) is better suited to OOP languages: you can express post-conditions about the effects of methods, not just their return values. But both of them involve testing assertions that are true in general (rather than assertions that should be true for a specific test case). And both can be tested using randomly generated inputs (with QuickCheck this is the only way, whereas with Eiffel I believe it is an optional feature of the AutoTest tool). Is there an umbrella term that encompasses both approaches? Or am I imagining a relationship that doesn't really exist?
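
    For readers unfamiliar with the property-based style, here is a minimal QuickCheck property in Haskell; it is an illustrative sketch, not an example from the post:

        import Test.QuickCheck

        -- A general assertion: reversing a list twice yields the original list.
        -- quickCheck tests it against 100 randomly generated lists by default.
        prop_reverseTwice :: [Int] -> Bool
        prop_reverseTwice xs = reverse (reverse xs) == xs

        main :: IO ()
        main = quickCheck prop_reverseTwice

    The property plays much the same role as a contract's postcondition: a claim quantified over all inputs, with random generation standing in for an exhaustive check.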

  • Not your typical Permission Denied Question

    - by Todd
    I recently reinstalled Ubuntu 11.10 (64-bit) on my computer (my hard drive took a powder). Before, I could move files around with the mv command; now when I try, I get the permission-denied message. I also get the message about "man sudo" when I open my terminal, and I'm pretty sure I did not get that before. Can I add a user/administrator and change something in my original admin account that I cannot change myself? I'm getting really frustrated with this; I do not recall having the same problem before. I tried gksudo nautilus and it appears to run, but then it just sits there with the cursor blinking and nothing happens.

  • can't see second hard drive

    - by Todd K
    I had XP Pro installed, and then I installed 12.04 dual-boot. The system has two hard drives: C: (as in Windows) is primarily OS stuff, and E: is data/personal documents. XP can still see both drives as normal after the Ubuntu install, but Ubuntu can only see the Windows C: drive. How can I access the second drive? I can't see any trace of it using the little *nix know-how I have, which isn't much... any help is great! Thanks!

  • Attaching Events to Document Better Than Attaching Them to Elements?

    - by Todd
    While bouncing around StackOverflow, I've noticed a number of people attaching events (notably click events) to the document as opposed to the elements themselves. For example, given this:

        <button id="myButton">CLICK ME</button>

    instead of writing this (using jQuery just for brevity):

        $('#myButton').on('click', function() { ... });

    they do this:

        $(document).on('click', function() { ... });

    and then presumably use event.target to drill down to the element that was actually clicked. Are there any gains/advantages in capturing events at the document level instead of at the element level?
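
    One commonly cited motivation is event delegation: a single handler attached high in the tree keeps working for matching elements that are added to the DOM later. A sketch using the delegated form of jQuery's .on(), reusing the hypothetical button from above:

        // Delegated handler: jQuery matches the bubbled click against the
        // selector, so this fires for #myButton even if the button is
        // inserted into the DOM after this code runs.
        $(document).on('click', '#myButton', function(event) {
            console.log('Clicked element id:', event.target.id);
        });

    The trade-off is that every click on the page bubbles up to document and is tested against the selector, which can add overhead on busy pages.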

  • copy entire row (without knowing field names)

    - by Todd Webb
    Using SQL Server 2008, I would like to duplicate one row of a table without knowing the field names. My key issue: as the table grows and mutates over time, I would like this copy script to keep working without me having to write out 30+ ever-changing fields, ugh. Also at issue, of course, is that IDENTITY fields cannot be copied. My code below does work, but I wonder if there's a more appropriate method than my thrown-together text-string SQL statement? So thank you in advance. Here's my (yes, working) code; I welcome suggestions on improving it. Todd

        alter procedure spEventCopy @EventID int
        as
        begin
            -- VARS...
            declare @SQL varchar(8000)

            -- LIST ALL FIELDS (*EXCLUDE* IDENTITY FIELDS).
            -- USE [BRACKETS] FOR ANY SILLY FIELD-NAMES WITH SPACES, OR RESERVED WORDS...
            select @SQL = coalesce(@SQL + ', ', '') + '[' + column_name + ']'
            from INFORMATION_SCHEMA.COLUMNS
            where TABLE_NAME = 'EventsTable'
              and COLUMNPROPERTY(OBJECT_ID('EventsTable'), COLUMN_NAME, 'IsIdentity') = 0

            -- FINISH SQL COPY STATEMENT...
            set @SQL = 'insert into EventsTable '
                     + ' select ' + @SQL
                     + ' from EventsTable '
                     + ' where EventID = ' + ltrim(str(@EventID))

            -- COPY ROW...
            exec(@SQL)

            -- REMEMBER NEW ID...
            set @EventID = @@IDENTITY

            -- (do other stuff here)

            -- DONE...
            -- JUST FOR KICKS, RETURN THE SQL STATEMENT SO I CAN REVIEW IT IF I WISH...
            select EventID = @EventID, SQL = @SQL
        end
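
    For reference, a hypothetical invocation (the EventID value is made up):

        exec spEventCopy @EventID = 42

    One design note: @@IDENTITY returns the last identity value generated on the connection in any scope, so a trigger on EventsTable that inserts into another table with an identity column would skew it; SCOPE_IDENTITY() is the usual safer choice for remembering the new row's ID.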

  • Not recognizing second monitor after hibernate (Windows 7, Dell D630 laptop)

    - by Brooks Moses
    I have a Dell Latitude D630 laptop which I've recently updated to Windows 7 64-bit. (The Dell site confirms that it's Windows-7-compatible.) Normally it lives in a docking station with a second monitor connected to the DVI port on the docking station, and I use the second monitor in a multi-monitor configuration with the laptop screen. Sometimes I undock the laptop and use it separately. Here's the problem: If I hibernate the laptop while undocked, and then power it back up in the docking station, it does not recognize the second monitor. By which I mean that not only does it not share the desktop onto the second monitor, but if I go into the control panel for display settings and press "Detect", it does not even detect the existence of the second monitor. I can tell it to "use the VGA port anyway" for a second monitor, but the monitor is connected to a DVI port on the docking station, so that doesn't do anything useful. If I entirely reboot the laptop while it's connected to the docking station, it has no problem recognizing the second monitor and using it. But then, if I hibernate, undock, de-hibernate while undocked and rehibernate, and then re-dock and de-hibernate, it's back to not recognizing the second monitor again. I'm reasonably certain that this is not a limitation of the hardware; this worked fine on Windows XP. I'm currently using the Windows 7 driver for my video card. I attempted to use the video driver from the Dell website for this laptop, but Dell only provides Vista 64-bit drivers, not Windows 7 64-bit drivers. Their "Windows 7 compatibility" page suggests that the Vista drivers should work, but when I attempted to install the driver, it gave me a "this operating system not supported" error and refused to install. Any further ideas?

  • Disable Windows Media Player "media server" network locations

    - by Moses
    I'm running Windows 8 and in the Computer menu, I see a huge list of "media server" network locations of many of the PCs in my network (most running Windows 7). Is there a way to either locally disable this so I don't see this list every time, or disable this sharing feature on the other computers? I've tried disabling "Media Streaming options" from the Network and Sharing Center (on my PC), but that had no effect. Another thing I tried was enabling Media Streaming, but then selecting all the found clients and clicking Blocked in the list of found clients. That had no effect in removing the list either. I've also attempted disabling the Windows Media Player Network Sharing Service, but alas, the list remains. I'm starting to believe there's a magic registry key to unbury and flip to a "1", but all the searching I've done has come up empty.

  • Changing order of Thunderbird email address autocomplete?

    - by Brooks Moses
    I recently did a system wipe and installed Thunderbird 3.0, and imported all of my email setup from a previous Thunderbird 2.0 installation. Almost everything is working fine, but I'm having a problem with the autocomplete in email addresses when writing messages. The relevant behavior is this: In the old 2.0 installation, the autocomplete appeared to know which email addresses I used most frequently, and so when I typed "m" in the address line, it would pick as the default selection the "m"-person who I frequently write email to. (It's possible this is an illusion and it simply picked people in the order I added them to my address book.) Thus, I have become used to typing "m"-"enter" in the address field, and getting this person. In the current 3.0 installation, however, the autocomplete order has changed. It's not the same as it was, and it's not alphabetical. The result is that I'm spending extra time looking at the email address bar, and more annoyingly, half the time the old muscle-memory kicks in and I find myself with an email that's addressed to a couple of customers rather than to my boss and coworker. Thus, two questions: How does Thunderbird determine this autocomplete order, among a set of addresses all of which are in the same address book? How can I change this ordering to be what I want? (I have tried Google-searching, and found a number of incomplete answers, nearly all of which were for version 1.0 or thereabouts, and reference settings dialog boxes that no longer exist.)

  • How to get a good current VMWare browser appliance?

    - by Brooks Moses
    I'd like to have a small VMWare virtual machine that runs a copy of Firefox, with Flash enabled. (Or some equivalently-capable browser.) I tried doing some Google searching with no luck finding good keywords, and tried looking through VMWare's "marketplace" of VMs, but all I found were things from 2006 or so. Is there a reasonably easy way to get a current one? Ideally, I'd like to just download one somewhere, but in the alternative, a quick how-to guide would be useful. I know I could go through the whole process of getting a full-Linux-install VM and setting things up, but that seems like quite a lot of trouble and ends up with a pretty heavyweight solution to the problem, so I'm hoping there's a simpler way.

  • How to embed Arial in PDF when PDF has Helvetica?

    - by Brooks Moses
    So, I've got a PDF file that's generated by a program that uses the Base 14 fonts, so that it contains "Helvetica" and "Times Roman". When I look at that in my copy of Acrobat 7.0 on Windows (for example), it shows these with Arial and Times New Roman. I'm fine with that. The issue is that I'd like to publish this PDF file on lulu.com, and they want all fonts embedded. Including the Base 14. I don't have a copy of Helvetica, so what seems the natural thing to do is substitute Arial for Helvetica and embed Arial. How can I do that? I tried using the Print feature in Acrobat (note: this is the full version, not Reader) to print to a PDF file using Adobe's "Print to PDF" printer driver, and selected the "Embed All Fonts" option in the print settings. This worked for the fonts that I had actual copies of, but instead of "printing" Arial for Helvetica -- which it would do if printing to a real printer -- it leaves all the Helvetica as Helvetica and doesn't embed it. Any suggestions for alternate ways to do this? What I really want is just a copy of my PDF file with ALL fonts embedded, and I'm quite happy if doing that means making one of the usual substitutions for the "Helvetica" that's in it. I'd be happiest if I can do that within Acrobat or other software that I have (pdftex, maybe?), but I'm willing to install another free utility if I need to.

  • Domain clients can't reach website with same name as domain

    - by Moses
    I know this is a very basic question but I need some help. I'm setting up a domain controller on Zentyal with the domain name example.com. But I need the domain users to be able to get to our company website with the same name (http://example.com) that's hosted out there on a third party's server. I know this has something to do with adding a DNS record, but I don't know what type. I would experiment, but I don't want to break the whole works!

  • Web page layout breaks when moved to live

    - by Moses
    I want to preface my question with the fact that I'm only a front-end web developer, so please excuse my gross lack of knowledge in this area. My company has three webservers: one for development (IIS v5), one for staging (IIS v5), and one for live deployment (IIS v6). Staging is an exact mirror of live. When I compare the staging and live web pages side by side in Firefox (3.6), the pages are identical. However, when I compare the staging and live pages with Internet Explorer (8), there are major differences:

    - In staging, the squares for bulleted lists are small. In live, the squares are big.
    - In staging, the borders for tables are thick. In live, the borders are thin.
    - In staging, an ASP-generated image is the proper height. In live, the image is cropped at the bottom by about 10px.

    In the end, the layout on live became broken because of these tiny differences, but why? Does the fact that live is on IIS 6 and staging is on IIS 5 account for the small variance in display? And is there any way I can change this server-side? Also, is there any reason why Firefox displays both correctly and IE displays both incorrectly?

  • What options do I have for creating customized keyboard shortcuts?

    - by Moses
    As I am getting more heavily into programming as a job and no longer just as a hobby, I am definitely in need of some ways to improve my productivity. One thing that would help is being able to create customized keyboard shortcuts for text/code snippets. For instance, holding down CMD+L+O+R+E+M would output a paragraph or two of the Lorem ipsum filler text, or CMD+F+U would create a function declaration. What I am ideally looking for is a database where I can store formatted text snippets, bind them to my choice of keystrokes, and then have the text paste whenever I perform the associated keystrokes. Are there any stand-alone applications that can do this for a Mac? Also, are there any text editors/IDEs that have this ability built in?
