Search Results

Search found 22139 results on 886 pages for 'security testing'.

  • Process Rules!

    - by Ajay Khanna
    One of the key components of a process is the business rule. Business rules take many forms inside your process definition and are, in a way, a manifestation of your company's business policy. Rules inside a process are used for policy enforcement, governance, decision management, operational efficiency, and more. The following are some basic types of rules that can be part of your process.

    1. Process conditions: These are defined as the process gateways that determine which path a process will take depending on the process parameters. For example: if discount > 10%, go to the approval path; otherwise, auto-approve the order.

    2. Data rules: These business rules are defined as facts in a decision table or knowledge base. The process captures all required parameters and submits them to a Rete-based rules engine, which processes the data and returns the result. For example, rules determining your insurance eligibility.

    3. Event rules: Here the system monitors the various events and event patterns emerging inside or external to the process. You can define actions or alerts to be triggered when a certain pattern of events emerges over a specified time period. Rules of this type need Complex Event Processing and are used in applications like credit card fraud detection or utility demand response.

    4. User interface rules: To add dynamic behavior to the UI, keep users from making mistakes, and enforce policy, another available mechanism is UI rules. They are evaluated as the end user fills out web forms, and may include enabling and disabling parts of the UI per business policy. An example: if the age of a user is less than 13 years, disable the credit card field and enable the "parental approval required" checkbox.

    Your process may include many of these rule types. Oracle OpenWorld provides a unique opportunity to listen to Oracle Business Process Management experts and customers. We will discuss business rules during various sessions at Oracle OpenWorld. Two sessions specifically focused on business rules are listed below:

    Accelerating an Implementation of Complex Worldwide Business Approval Rules - Wednesday, Oct 3, 10:15 AM, Moscone South – 305

    Oracle Business Rules Use Cases Design and Testing - Wednesday, Oct 3, 3:30 PM, Marriott Marquis – Golden Gate C3

    The Oracle Business Process Management track covers a variety of topics, with speakers covering technology, methodology and best practices. You can see the list of Business Process Management sessions here. Come back to this blog for more coverage from Oracle OpenWorld!

  • Organizing Git repositories with common nested sub-modules

    - by André Caron
    I'm a big fan of Git sub-modules. I like to be able to track a dependency along with its version, so that you can roll back to a previous version of your project and have the corresponding version of the dependency to build safely and cleanly. Moreover, it's easier to release our libraries as open source projects because the history for the libraries is separate from that of the applications that depend on them (and which are not going to be open sourced). I'm setting up a workflow for multiple projects at work, and I was wondering what would happen if we took this approach to a bit of an extreme instead of having a single monolithic project. I quickly realized there is a potential can of worms in really using sub-modules.

    Suppose a pair of applications, studio and player, and dependent libraries core, graph and network, where the dependencies are as follows:

    - core is standalone
    - graph depends on core (sub-module at ./libs/core)
    - network depends on core (sub-module at ./libs/core)
    - studio depends on graph and network (sub-modules at ./libs/graph and ./libs/network)
    - player depends on graph and network (sub-modules at ./libs/graph and ./libs/network)

    Suppose that we're using CMake and that each of these projects has unit tests and all the rest. Each project (including studio and player) must be able to be compiled standalone to perform code metrics, unit testing, etc. The thing is, after a recursive git submodule fetch you get the following directory structure:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/graph/
        studio/libs/graph/libs/         (sub-module depth: 2)
        studio/libs/graph/libs/core/
        studio/libs/network/
        studio/libs/network/libs/       (sub-module depth: 2)
        studio/libs/network/libs/core/

    Notice that core is cloned twice in the studio project. Aside from wasting disk space, I have a build system problem because I'm building core twice and I potentially get two different versions of core.

    Question: How do I organize sub-modules so that I get the versioned dependency and standalone build without getting multiple copies of common nested sub-modules?

    Possible solution: If the library dependency is somewhat of a suggestion (i.e. in a "known to work with version X" or "only version X is officially supported" fashion) and potential dependent applications or libraries are responsible for building with whatever version they like, then I could imagine the following scenario:

    - Have the build system for graph and network tell them where to find core (e.g. via a compiler include path).
    - Define two build targets, "standalone" and "dependency", where "standalone" is based on "dependency" and adds the include path to point to the local core sub-module.
    - Introduce an extra dependency: studio on core. Then, studio builds core, sets the include path to its own copy of the core sub-module, then builds graph and network in "dependency" mode.

    The resulting folder structure looks like:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/core/
        studio/libs/graph/
        studio/libs/graph/libs/         (empty folder, sub-modules not fetched)
        studio/libs/network/
        studio/libs/network/libs/       (empty folder, sub-modules not fetched)

    However, this requires some build system magic (I'm pretty confident this can be done with CMake) and a bit of manual work on the part of version updates (updating graph might also require updating core and network to get a compatible version of core in all projects). A sketch of how this might look is below. Any thoughts on this?
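    For illustration, here is a minimal sketch of that "dependency" mode from studio's side; the repository URL and the CORE_INCLUDE_DIR cache variable are hypothetical (the variable would have to be consumed by graph's and network's CMakeLists.txt):

        # Clone studio and fetch only its first-level sub-modules (no --recursive),
        # so graph/libs/core and network/libs/core stay unfetched.
        git clone git@example.com:studio.git
        cd studio
        git submodule update --init    # brings in libs/core, libs/graph, libs/network

        # Point graph and network at studio's single copy of core.
        mkdir build && cd build
        cmake -DCORE_INCLUDE_DIR="$PWD/../libs/core/include" ..
        make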

  • Getting away from a customized Magento 1.4 installation - Magento 1.6, OpenCart, or others?

    - by Phil
    I'm dealing with a Magento 1.4.0.0 Community Edition installation with various undocumented changes to the core (mostly integration with an ERP system), an outdated Sweet Tooth Points & Rewards module and some custom payment providers. It also doubles as a mediocre blogging/CMS system. It has one store each for 3 different languages, with about 40 product categories for a few hundred products.

    [rant] With no prior experience with any PHP e-commerce systems, I find it very difficult to work with. I attempted to install Magento 1.4.0.0 on my local WAMP dev machine; it installs fine, but the main page and search do not show any products no matter what I do in the backend admin panel. I don't know what's wrong with it, and whatever information I googled is either too old or too new for Magento 1.4. Later I was given FTP access to the testing server, which neither my manager nor I have permission to install XDebug on, as apparently it runs on the same server as the production server (yikes). Trying to learn how Magento works is torture. I spent a week trying to add some fields into the Onepage Checkout before giving up and going to work on something else. The template system, just like the rest of Magento, is a bloated mishmash of overcomplicated directory structures, weird config XML files and EAV databases. I went into 6 different models and several content blocks in the backend just to change what the front page looks like. With little helpful and clear documentation (unlike CodeIgniter) and various breaking changes between minor point revisions, which make it hard to find useful information, Magento 1.4 is a developer killer. [/rant]

    The client is planning to redesign the site and has decided it might as well move on from this unsustainable, hacky, upgrade-unfriendly, developer-unfriendly mess. Magento 1.4 is starting to show its age; with Magento 1.7 coming soon, the client is considering upgrading to Magento 1.6 or 1.7 if it has improved since 1.4. The customizations done to the current Magento 1.4 installation will have to be redone, and a new license for the Sweet Tooth Points & Rewards module will have to be bought.

    The client is also open to other e-commerce systems. I've looked at OpenCart and it seems to be quite developer-friendly with a fairly simple structure. I found some complaints regarding its performance when the shop has thousands of categories or products, but this is not an issue with the current number of products my client has. It seems to be solid ground for easy customization to bring the rewards system and ERP integration over. What should the client upgrade to in this case?

  • TDD - Red-Light-Green_Light:: A critical view

    - by Renso
    Subject: The concept of red-light-green-light for TDD/BDD style testing has been around since the dawn of time (well, almost). Having written thousands of tests using this approach, I find myself questioning the validity of the principle.

    The issue: False positive, or a valid test strategy that can be trusted?

    A critical view: I agree that the red-green-light concept has some validity, but who has ever written 2000 tests for a system that goes through a ton of changes, due to the organic nature of the application, and does not have to change, delete or restructure their existing tests? If your answer to the latter question is "Yes, I had a situation (or several) where I had to refactor my code and it caused me to have to rewrite/change/delete my existing tests", read on; else press CTRL+ALT+Del. :-)

    Once a test has been written, failed (red light), and you then complete your code and get the green light for that last test, the test for that functionality is now in green-light mode. It can never return to red light again as long as the test exists, even if the test itself is not changed and only the code it tests is changed to fail the test. Why, you ask? Because the reason for the initial red light when you created the test is not guaranteed to be the same reason it is now failing after a code change has been made.

    Furthermore, when the same test is changed to compile correctly in case of a compile-breaking code change, the green light has once again been invalidated. Why? Because there is no guarantee that the test code fix is in the same green-light state as it was when it first ran successfully. To make matters worse, if you fix a compile-breaking test without going through the red-light-green-light process, your test fix is essentially useless and very dangerous, as it now provides you with a false positive at best. Thinking your code has passed all tests and that it works correctly is far worse than not having any tests at all, at least for the part of the system that the test code represents.

    What to do? My recommendation is to delete the tests affected and re-create them from scratch. I have to agree: hard to do and justify if it has a significant impact on project deadlines. What do you think?

  • Credentials Not Passed From SharePoint WebPart to WCF Service

    - by Jacob L. Adams
    I have spent several hours trying to resolve this problem, so I wanted to share my findings in case someone else has the same problem. I had a web part which was calling out to a WCF service on another server to get some data. The code I had was essentially:

        using System.ServiceModel;
        using System.ServiceModel.Channels;
        ...
        var binding = new CustomBinding(
            new HttpTransportBindingElement
            {
                AuthenticationScheme = System.Net.AuthenticationSchemes.Negotiate
            }
        );
        var endpoint = new EndpointAddress(new Uri("http://someotherserver/someotherservice.svc"));
        var someOtherService = new SomeOtherServiceClient(binding, endpoint);
        string result = someOtherService.SomeServiceMethod();

    This code would run fine on my local instance of SharePoint 2010 (Windows 7 64-bit). However, when I deployed it to the testing environment, I would get a yellow screen of death with the following message:

        The HTTP request is unauthorized with client authentication scheme 'Negotiate'.
        The authentication header received from the server was 'Negotiate,NTLM'.

    I then went through the usual checklist of Windows Authentication problems:

    - Check WCF bindings to make sure authentication is set correctly
    - Check IIS to make sure Windows Authentication is enabled and anonymous authentication is disabled
    - Check that the SharePoint server trusts the server hosting the WCF service
    - Verify that the account the IIS application pool is running under has access to the other server

    I then spent lots of time digging into really obscure IIS, machine.config, and trust settings (as well as lots of time on Google and StackOverflow). Eventually I stumbled upon a blog post by Todd Bleeker describing how to run code under the application pool identity. Wait, what? The code is not already running under the application pool identity? Another quick Google search led me to an MSDN page implying that SharePoint indeed does not run under the app pool credentials by default. Instead, SPSecurity.RunWithElevatedPrivileges is needed to run code under the app pool identity. Therefore, changing my code to the following worked seamlessly:

        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using Microsoft.SharePoint;
        ...
        var binding = new CustomBinding(
            new HttpTransportBindingElement
            {
                AuthenticationScheme = System.Net.AuthenticationSchemes.Negotiate
            }
        );
        var endpoint = new EndpointAddress(new Uri("http://someotherserver/someotherservice.svc"));
        var someOtherService = new SomeOtherServiceClient(binding, endpoint);
        string result;
        SPSecurity.RunWithElevatedPrivileges(() =>
        {
            result = someOtherService.SomeServiceMethod();
        });

  • Recipient address rejected: User unknown in local recipient table;

    - by Thufir
    I've gone through the guide for Mailman with some difficulty, but seem to be nearly there. I'm able to navigate to the Mailman web GUI, create lists and subscribe. I just subscribe my local FQDN, so [email protected] for testing purposes. This FQDN only works on localhost. However, e-mails to the list address, in this case [email protected], are rejected:

        root@dur:~# tail /var/log/mail.log
        Aug 28 08:28:43 dur postfix/master[12208]: terminating on signal 15
        Aug 28 08:28:44 dur postfix/postfix-script[12322]: starting the Postfix mail system
        Aug 28 08:28:44 dur postfix/master[12323]: daemon started -- version 2.9.1, configuration /etc/postfix
        Aug 28 08:28:46 dur postfix/postfix-script[12332]: stopping the Postfix mail system
        Aug 28 08:28:46 dur postfix/master[12323]: terminating on signal 15
        Aug 28 08:28:47 dur postfix/postfix-script[12437]: starting the Postfix mail system
        Aug 28 08:28:47 dur postfix/master[12438]: daemon started -- version 2.9.1, configuration /etc/postfix
        Aug 28 08:29:29 dur postfix/smtpd[12460]: connect from localhost[127.0.0.1]
        Aug 28 08:29:30 dur postfix/smtpd[12460]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 550 5.1.1 <[email protected]>: Recipient address rejected: User unknown in local recipient table; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<dur.bounceme.net>
        Aug 28 08:29:33 dur postfix/smtpd[12460]: disconnect from localhost[127.0.0.1]

        root@dur:~# ll /var/lib/mailman/data/
        total 56
        drwxrwsr-x 2 root list  4096 Aug 28 08:28 ./
        drwxrwsr-x 8 root list  4096 Aug 27 19:58 ../
        -rw-r--r-- 1 root list     0 Aug 28 04:36 aliases
        -rw-r--r-- 1 root list 12288 Aug 28 04:36 aliases.db
        -rw-r--r-- 1 root list 12288 Aug 28 08:28 aliases.db.db
        -rw-r----- 1 root list    41 Aug 27 21:04 creator.pw
        -rw-rw-r-- 1 root list    10 Aug 27 19:58 last_mailman_version
        -rw-r--r-- 1 root list 14100 Oct 19  2011 sitelist.cfg

        root@dur:~# grep alias /etc/postfix/main.cf
        alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
        alias_database = hash:/var/lib/mailman/data/aliases.db
        #alias_database = hash:/etc/aliases

        root@dur:~# postconf -n
        alias_database = hash:/var/lib/mailman/data/aliases.db
        alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        default_transport = smtp
        home_mailbox = Maildir/
        inet_interfaces = loopback-only
        mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}"
        mailbox_size_limit = 0
        mailman_destination_recipient_limit = 1
        mydestination = $myhostname localhost.$mydomain localhost $mydomain
        myhostname = dur.bounceme.net
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        readme_directory = no
        recipient_delimiter = +
        relay_domains = lists.example.com
        relay_transport = relay
        relayhost =
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_path = private/dovecot-auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_tls_auth_only = yes
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key
        smtpd_tls_mandatory_ciphers = medium
        smtpd_tls_mandatory_protocols = SSLv3, TLSv1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport

    Why is this e-mail rejected? It seems it may be related to the alias_maps and alias_database settings in Postfix. (The transport-based wiring I keep seeing suggested for Mailman is sketched below for reference.)
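    For reference, a minimal sketch of that transport-based wiring, with an illustrative list domain; it assumes a "mailman" transport is defined in master.cf, as in Ubuntu's postfix-to-mailman guide:

        # /etc/postfix/main.cf (sketch)
        relay_domains = lists.dur.bounceme.net
        transport_maps = hash:/etc/postfix/transport

        # /etc/postfix/transport (sketch)
        lists.dur.bounceme.net  mailman:

        # rebuild the lookup table and reload
        postmap /etc/postfix/transport
        postfix reload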

  • Stop YouTube Videos from Automatically Playing in Chrome

    - by The Geek
    If you’ve actually used the internet before, you’ve probably come across a page with an auto-playing YouTube clip, and chances are good it was a rather annoying one. Here’s how to stop them from starting automatically in Chrome. We’ve already told you how to stop them from automatically playing if you’re a Firefox user (best answer: use Flashblock!), but now it’s time for Chrome users to get their turn.

    Use the Stop Autoplay for YouTube Extension

    The great thing about this extension is that it stops the video from playing, but allows it to continue buffering, so when you do feel like playing the video, it’ll already be downloaded; really useful for people with slower internet connections. There’s no UI or anything fancy: just head to the extension page and click the Install button. If you want to get rid of it later, use the Tools > Extensions menu (or type chrome://extensions/ into your address bar), and then click the Uninstall link for that add-on.

    Download Stop Autoplay for YouTube [Google Chrome Extensions]

    Using FlashBlock for Chrome

    If you really wanted to, you could just disable Flash across the board using FlashBlock for Chrome. Once you’ve installed the extension, you won’t see any Flash elements anywhere, and you’ll have to move your mouse over them and click to enable them each time. When I installed the extension the first time, I noticed that YouTube was already in the allow list. I’m not sure if that’s the default setting or not, but you can use the icon in the address bar, or the Options from the Extensions panel, to get to the settings page, and from there you can remove anything from the white list that you wouldn’t want. Another nice feature of FlashBlock is that it can also block Silverlight, or you could simply uninstall or remove unnecessary Chrome plug-ins.

    Download FlashBlock for Chrome

  • Enhance Your Gmail Account in Chrome

    - by Asian Angel
    Are you tired of items like the Chat and Invite boxes cluttering up your Gmail account? Then join us as we look at the Better Gmail extension for Google Chrome.

    Before: Here are some examples of items that you may be tired of looking at in your Gmail account, such as the "Footer" below your "Inbox", the "Chat Box", and the "Invitation Box". Perhaps you would also like to have the "New Window, Print all, & Create a document" commands moved elsewhere. And of course there is everyone's "favorite": sponsored links. Time to do some cleaning up and reorganizing.

    Better Gmail in Action: As soon as you have installed Better Gmail, a new tab will automatically open and present you with the available options. Place a checkmark in the box for each option that you would like activated and click on "Save" when finished. Note: The final option entry is a tie-in with two other "linked" extensions (Folders4Gmail & HTML Signature), while the middle listing is a link to an article for disabling Google Buzz. Once you have saved your changes in the "Options" you will be prompted to refresh your Gmail tab to see the changes.

    Going back to our "Inbox Area", everything looks so much more streamlined and clean now. Goodbye, clutter! The "New Window, Print all, & Create a document" commands definitely look a lot nicer as a small toolbar above our e-mail. And the right side: you can see for yourself just how much better that looks. No more distractions there to bother you as you read your e-mail.

    Conclusion: If you have been wanting to get rid of the undesirable elements visible in your Gmail account, then hurry over to the Better Gmail page, grab the extension and enjoy the better view.

    Links: Download the Better Gmail extension (Google Chrome Extensions)

  • How To Disable Individual Plug-ins in Google Chrome

    - by The Geek
    Have you ever wondered how to disable useless or insecure browser plug-ins in Google Chrome? Here's the lowdown on how to get rid of Java, Acrobat, Silverlight, and the rest of the plug-ins you probably want to get rid of.

    Disabling Plugins in Google Chrome

    If you head to about:plugins in your address bar, you'll probably see a list of plugins, but you won't be able to disable them yet. What you'll need to do is switch over to the Dev channel of Chrome, which gives you access to all the latest features, though you might be warned that the Dev channel can sometimes be less stable than the release or beta channels.

    Ready to proceed? Head to the Dev Channel page, and then click the link to run the installer. You'll be prompted to restart Chrome when you're done. Note that Mac and Windows users can both run an installer to switch; Linux users will have to install a package.

    Note: Once you've switched to the Dev channel, you can't really switch back to the stable channel. You'll have to uninstall Chrome and then reinstall the regular version.

    Now that you've switched to the Dev channel and restarted your browser, head to about:plugins in the address bar, and then just disable each plugin you really don't need. Plugins you can generally live without? Java, Acrobat, Microsoft Office, Windows Presentation Foundation, Silverlight. These will be on a case-by-case basis, of course, but the vast majority of large websites don't require any of those. When it comes right down to it, the only plugin that most people require is Flash... and leave the "Default Plug-in" alone too.

    Special thanks to @jordanconway for pointing out the solution.

  • Preview Links and Images in Google Chrome

    - by Asian Angel
    Anyone who has used the CoolPreviews extension in Firefox knows how wonderful that preview window can be. Now you can get the same kind of functionality in Chrome with the ezLinkPreview extension. Note: The extension will not work on websites containing "frame buster" code (navigation to the actual URL will occur).

    Before: Normally, if you want to have a better look at a particular webpage, the only option you have is to go ahead and open it in a new tab or window. But it would certainly be nice to be able to take a quick "sneak peek" beforehand.

    After: As soon as you have finished installing the extension, everything is ready to go; just refresh any pages open prior to installation and enjoy the preview goodness. When you hover your mouse near any link you will notice a small "Preview" button appear with the letters "EZ" inside. Click on the "Preview" button to open the popup window. Now you can get a very good idea of whether the page is worth visiting or not.

    In the popup window, notice that you can see the URL for the webpage and access a convenient set of buttons on the right side (Open in new tab, Pin to keep the overlay open, and Close). You can even resize the window as desired to best suit your needs (you can grab any of the four corners to resize the popup window). It is also possible to open a "preview window" inside the popup window. If you have Chrome maximized you can enjoy using a large "preview window". Now that is nice! For those who may be curious, ezLinkPreview works nicely with images too.

    Conclusion: The ezLinkPreview extension provides a quick and simple way to preview links and/or images while you are browsing. If you are looking for similar functionality in Firefox, then be sure to read our article on CoolPreviews here.

    Links: Download the ezLinkPreview extension (Google Chrome Extensions)

  • Databases and Beer

    - by Johnm
    It is a bit of a no-brainer: include the word "beer" in the subject line of an e-mail or a blog post title and you can be certain that it will be read. While at times this practice might be a ploy to increase readership, that is not the case for this blog post. There is inspiration to be drawn from other industries that we, as database professionals, can apply to our own. In this post I will highlight one of my favorite participants in the brewing industry.

    The Boston Beer Company started in the 1970s in Boston, Massachusetts. Others may be more familiar with this company through their Samuel Adams Boston Lager and various seasonal beers. I am continually inspired by their commitment to mastery of the brewing process, which they evangelize frequently in their commercials. They are also continually pushing the boundaries of beer as we know it while working within traditional constraints. A recent example of this is their collaboration with the Weihenstephan Brewery of Munich, Germany to produce the soon-to-be-released Infinium beer. This beer, while brewed as an ale, is touted as something closer to Champagne, all while complying with the Reinheitsgebot. The Reinheitsgebot, also known as the "German Beer Purity Law", originated in 1516. This law states that beer is to consist of water, barley, hops and yeast. That's it. Quite a limiting constraint indeed. And yet, The Boston Beer Company pushed forward.

    Much like the process of brewing, the discipline of database design and architecture is one that is continually in process and driven by the pursuit of mastery. While we do not have purity laws to constrain us, we have many other types: best practices, company policies, government regulations, security and budgets. Through our fellow comrades, we discuss the challenges and constraints in which we operate. We boil down the principles and theories that define our profession. We reassemble these into something that is complementary to the business needs that we must fulfill. As a result, it is not uncommon to see something amazingly innovative in a small business that is pushing the boundaries of its database well beyond its intended state. It is equally common to see innovation in the use of the more advanced database features found in large businesses.

    The tag line for The Boston Beer Company is "Take Pride In Your Beer." I would like to offer an alternative and say "Take Pride In Your Database." So, as you pour your next Boston Lager into a frosted glass, consider those who spend their lives mastering the craft of brewing, and strive to interject their spirit into everything that you do as a database professional. Cheers!

  • Selectively Exposing Functionallity in .Net

    - by David V. Corbin
    Any developer should be aware of the principles of encapsulation, cross-tier isolation, and cross-functional separation of concerns. However, it seems few take the time to consider the adage of "minimal yet complete"[1] when developing software.

    Consider the exposure of "business objects" to the user interface. Some common situations occur:

    - Accessing a given element requires a compound set of calls that do not "make sense" to the user interface.
    - More information than absolutely required is exposed to the user interface.

    It would be much cleaner if a custom interface was provided that exposed exactly (and only) the information that is required by the consumer. Achieving this using conventional techniques would require the creation (and maintenance!) of custom classes to filter and transpose the information into the ideal format. Determining the ROI of this approach can be very difficult, and as a result it is often ignored completely.

    There is another approach, which is largely made practical by virtue of the Action and Func delegates. From a caller's point of view, the following two samples can be used interchangeably:

        interface ISomeInterface
        {
            void SampleMethod1(string param);
            string SampleMethod2(string param);
        }

        class ISomeInterface
        {
            public Action<string> SampleMethod1 { get; }
            public Func<string, string> SampleMethod2 { get; }
        }

    The capabilities this simple change enables are significant (and remember, it does not change the syntax at the call site):

    - The delegates can be initialized to directly call the proper method of any target class.
    - The delegates can be dynamically updated based on the current state.
    - The "interface" can NOT be cast to the concrete class (which often exposes more functionality).

    By limiting the interface to the exact functionality required, the reduced surface area will typically result in lower development, testing and maintenance costs. We are currently in the process of posting a project on CodePlex which illustrates this (and many other) techniques that have proven helpful in creating robust yet flexible solutions that are highly efficient[2] and maintainable. This post will be updated as soon as the project is published.

    1) Credit: Scott Meyers, Effective C++, Addison-Wesley 1992
    2) For those who read my previous post on performance, it should be noted that the use of delegates is on the same order of magnitude (actually a tiny amount faster) as conventional interfaces.
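    Postscript: to make the first bullet above concrete, here is a minimal sketch of the initialization (all type and member names are hypothetical, not from the CodePlex project mentioned above):

        using System;

        // The concrete class; consumers of DocumentFacade never see this type.
        class SecureDocumentStore
        {
            public void Save(string document) { /* persist the document */ }
            public string Load(string key) { return "contents of " + key; }
        }

        // The delegate-based "interface": exposes exactly two operations.
        class DocumentFacade
        {
            public Action<string> SampleMethod1 { get; private set; }
            public Func<string, string> SampleMethod2 { get; private set; }

            public DocumentFacade(SecureDocumentStore store)
            {
                // Each delegate points directly at the target method; a cast
                // back to SecureDocumentStore is impossible from here.
                SampleMethod1 = store.Save;
                SampleMethod2 = store.Load;
            }
        }

    The call site reads exactly as it would against the interface version, e.g. facade.SampleMethod1("draft").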

  • ATI (fglrx) Dual monitor / laptop hot-plugging

    - by Brendan Piater
    I feel like I've gone back 5 years on my desktop today. I'll try not to dump too much frustration here...

    I've been running 12.04 since alpha with the ATI open source drivers and the GNOME 3 desktop, and I've been generally very happy with them, with only small issues along the way. Of course the open driver does not support 3D acceleration 100%, so games like my newly purchased Amnesia from the Humble Bundle would not play. OK, no worries; the ATI driver is in the repos, so let me have a go, I thought. With all this testing that's been done with multi-monitor support, what could go wrong...?

    How I use my computer: it's a laptop with an HD 3670 card in it. I spend about 50% of the time working directly on the laptop (at home) and about 50% of the time working with an additional display connected (at work), in a multi-desktop environment.

    What's happening now:

    - I installed the drivers and things seemed to be working, save some other small (non-critical) bugs.
    - This morning I take my machine and plug the additional monitor into it, and nothing happens... OK, fine.
    - I open "Displays" and try to configure the dual display; it won't work.
    - I open the ATI config "thing" (because it is a thing, a crap thing) and set up the monitors there. Reboot, it says (oh ffs, really... OK).
    - I reboot, log in, and wow, I get a GNOME 2 desktop (I presume the GNOME 3 fallback) and no multi-monitor... great. (Screenshot: http://ubuntuone.com/5tFe3QNFsTSIGvUSVLsyL7)
    - After getting into a situation where I had to Ctrl+Alt+Del to get out of a frozen display, I eventually managed to set up a single-display desktop on the "main" monitor. OK, time to go home... I unplug the monitor... nothing happens... oh boy, here we go...
    - I try "Displays" again; nothing, it just hangs the display... great. I crash all the apps and reboot.

    So it's been a trying day... What I really hope is that someone else has figured out how to avoid this PAIN. Please help with a solution that:

    - allows me to run fglrx (so I can run the games I want)
    - allows me to hot-plug a monitor to my laptop and remove it again
    - allows me to change the display set-up to include the hot-plugged monitor (preferably automatically, like it did with the open drivers)

    Next best, if that's not possible:

    - switch between laptop-only display and monitor-only display easily (i.e. not having to reboot/logout/suspend etc.)

    Really appreciate the time of anyone who has a solution. Thanks in advance.

    Regards, Brendan

    PS: I guess I should file a bug about this too, so some direction as to the best place to file it would be appreciated.

  • VirtualBox image SOA Suite &amp; BPM Suite 11.1.1.6.0 & Your feedback?

    - by JuergenKress
    The integration PM team is very pleased to announce the release of a new version of our pre-configured SOA/BPM VirtualBox image for testing and evaluation. This VirtualBox appliance contains a fully configured, ready-to-use SOA/BPM/WebCenter 11.1.1.6.0 installation. All you need is to install Oracle VM VirtualBox on your desktop/laptop and import the SOA/BPM appliance, and you are ready to try out SOA Suite and BPM Suite; no installation and configuration required!

    The following software is installed in this VirtualBox image:

    - Oracle Enterprise Linux (64-bit) EL 5 Update 5
    - Oracle XE Database 11.2.0
    - Oracle SOA Suite 11.1.1.6.0 (includes Service Bus)
    - Oracle BPM Suite 11.1.1.6.0
    - Oracle WebCenter Content (Enterprise Content Management) 11.1.1.6.0
    - Oracle WebCenter Suite 11.1.1.6.0
    - Oracle JDeveloper 11.1.1.6.0
    - JRockit R28.2.0-79-146777-1.6.0_29s
    - Sun Java SDK 1.6.0_29-b11

    If you want to try it out, please go to the Pre-built Virtual Machine for SOA Suite and BPM Suite 11g OTN page for detailed instructions on downloading and importing the VirtualBox image.

    Jon Petter Hjulstad (Twitter & LinkedIn) published his first impressions on his blog: "We have been waiting for the new VirtualBox image for a long time, and finally it is here. The appliance has improved in many ways since the last release, so it has been worth waiting for. Both the appliance itself and the documentation are excellent. It is evident that Oracle has listened to feedback on the previous release, and I think the developer VMs are useful. Especially the adoption of new patchsets and versions (e.g. when 12c becomes available) will gain a lot from quick hands-on experiences."

    This VirtualBox appliance is a multipurpose image which can be used in different domain configurations. The image has a number of pre-configured domains that you can use depending on your need. The image can be set up so that it requires as few resources as possible; you can, for instance, easily disable B2B if you do not need it, or you can shut down the desktop console and save 600 MB. It is important to say that this image is not for production purposes. Read the full article.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

  • Swiss Re increases data warehouse performance and deploys in record time

    - by KLaker
    Great information on yet another data warehouse deployment on Exadata.

    A little background on Swiss Re: in 2002, Swiss Re established a data warehouse for its client markets and products to gather reinsurance information across all organizational units into an integrated structure. The data warehouse provided the basis for reporting at the group level, with drill-down capability to individual contracts, while facilitating application integration and data exchange by using common data standards. Initially focusing on property and casualty reinsurance information only, it now includes life and health reinsurance, insurance, and non-life insurance information.

    Key highlights of the benefits that Swiss Re achieved by using Exadata:

    - Reduced the time to feed the data warehouse and generate data marts by 58%
    - Reduced average runtime by 24% for standard reports, comfortably loading two data warehouse refreshes per day with incremental feeds
    - Freed up technical experts by significantly minimizing time spent on tuning activities

    Most importantly, this was one of the fastest project deployments in Swiss Re's history. They went from installation to production in just four months! What is truly surprising is that it only took two weeks between power-on and testing the machine with full data volumes! Business teams at Swiss Re are now able to fully exploit up-to-date analytics across property, casualty, life, health insurance, and reinsurance lines to identify successful products.

    These points are highlighted in the following quotes from Dr. Stephan Gutzwiller, Head of Data Warehouse Services at Swiss Re:

    "We were operating a complete Oracle stack, including servers, storage area network, operating systems, and databases, that was well optimized and delivered very good performance over an extended period of time. When a hardware replacement was scheduled for 2012, Oracle Exadata was a natural choice, and the performance increase was impressive. It enabled us to deliver analytics to our internal customers faster, without hiring more IT staff."

    "The high-quality data that is readily available with Oracle Exadata gives us the insight and agility we need to cater to client needs. We also can continue re-engineering to keep up with the increasing demand without having to grow the organization. This combination creates excellent business value."

    Our full press release is available here: http://www.oracle.com/us/corporate/customers/customersearch/swiss-re-1-exadata-ss-2050409.html. If you want more information about how Exadata can increase the performance of your data warehouse, visit our home page: http://www.oracle.com/us/products/database/exadata-database-machine/overview/index.html

  • Essbase 11.1.2 - AgtSvrConnections Essbase Configuration Setting

    - by Ann Donahue
    AgtSvrConnections is a documented Essbase configuration setting used in conjunction with the AgentThreads and ServerThreads settings. Basically, when a user logs into Essbase, the AgentThreads connect the user to the ESSBASE process, the AgtSvrConnections connect the ESSBASE process to the ESSSVR application process, and the ServerThreads are then used for end-user activities.

    In Essbase 11.1.2, the default value of the AgtSvrConnections setting was changed to 5; in previous Essbase releases, the default value is 1. It is recommended that tuning of the AgtSvrConnections setting be done incrementally, by 1 or 2 at most, and based on the number of concurrent Set Active/Clear Active calls. In the Essbase DBA Guide and Technical Reference, the maximum recommended setting is to not exceed what is set for AgentThreads; however, we have found that most customers do not need to exceed a setting of 10. In general, it is OK to set AgtSvrConnections close to the AgentThreads setting. However, there have been customers that needed an AgentThreads setting greater than 10, and we have found that an AgtSvrConnections setting higher than 5-10 can have a negative impact on Essbase due to too many TCP ports being used unnecessarily. As with all Essbase.cfg settings, it is best to set values to what is needed based on process load, not arbitrarily to high values.

    In order to monitor and tune the AgtSvrConnections setting, monitor the application log for logins and Set Active/Clear Active messages. If there are a lot of logins and Set Active/Clear Active messages happening in a short period of time, making it appear that logins are taking longer, incrementally increase the AgtSvrConnections setting by 1 or 2, which can then help with login speed. The login performance tolerance differs from one customer environment to another, since there are other factors that can impact this performance, e.g. network latency.

    What happens in Essbase when a user logs in:

    1. ESSBASE issues a Set Active to the ESSSVR process. Each application has its own ESSSVR process.
    2. Set Active then calls MultipleAsyncLogout and waits on the pipe connection.
    3. MultipleAsyncLogout goes back to ESSBASE.
    4. ESSBASE then needs to send the logout back to the ESSSVR process.

    When the AgtSvrConnections setting needs to be increased from the default of 5, it is because Essbase cannot find a free connection, since the previous connections are in use between ESSBASE and ESSSVR. In this example, we may want to increase AgtSvrConnections from 5 to 7 to improve login performance. Again, it is best to set Essbase settings to what is needed based on process load and not arbitrarily to high values. In general, stress or performance testing environments using automated tools may need higher-than-normal settings, because automated processes log in and out at high speeds. Typically, in a real-life production environment, the settings are much closer to the default values.
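    For reference, all three settings live in essbase.cfg; a minimal sketch follows, with values that are purely illustrative of the incremental tuning described above, not recommendations for any particular site:

        ; essbase.cfg (sketch; values are illustrative)
        ; AgtSvrConnections raised from the 11.1.2 default of 5 in steps of 1-2
        AGENTTHREADS 10
        SERVERTHREADS 25
        AGTSVRCONNECTIONS 7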

  • Are SQL Injection vulnerabilities in a PHP application acceptable if mod_security is enabled?

    - by Austin Smith
    I've been asked to audit a PHP application. No framework, no router, no model. Pure PHP. Few shared functions. HTML, CSS, and JS all mixed together. I've discovered numerous places where SQL injection would easily be possible. There are other problems with the application (XSS vulnerabilities, rampant inline CSS, code copy-pasted everywhere), but this is the biggest. Sometimes they escape inputs, not using a prepared query or even mysql_real_escape_string(), mind you, but using addslashes(). Often, though, their queries look exactly like this (pasted from their code but with columns and variable names changed):

        $user = mysql_query("select * from profile where profile_id='".$_REQUEST["profile_id"]."'");

    (A parameterized version of that same query is sketched at the end of this question.) The developers in question claimed that they were unable to hack their application. I tried, and found mod_security to be enabled, resulting in HTTP 406 for some obvious SQL injection attacks. I believe there are sophisticated workarounds for mod_security, but I don't have time to chase them down.

    They claim that this is a "conceptual" matter and not a "practical" one, since the application can't easily be hacked. Their internal auditor agreed that there were problems, but emphasized the conceptual nature of the issues. They also use this conceptual/practical argument to defend against inline CSS and JS, absence of code organization, XSS vulnerabilities, and massive amounts of repetition. My client (rightly so, perhaps) just wants this to go away so they can launch their product. The site works. You can log in, do what you need to do, and things are visibly functional, if slow. SQL injection would indeed be hard to do, given mod_security. Further, their talk of "conceptual vs. practical" is rhetorically brilliant, considering that my client doesn't understand web application security. I worry that they've succeeded in making me sound like an angry puritan.

    In many ways, this is a problem of politics, not technology, but I am at a loss. As a developer, I want to tell them to toss the whole project and start over with a new team, but I face a strong defense from the team that built it and a client who really needs to ship their product. Is my position here too harsh? Even if they fix the SQL injection and XSS problems, can I ever endorse the release of an unmaintainable tangle of spaghetti code?
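    For context, the fix I would be asking them for is hardly exotic; here is a sketch of the same lookup as a parameterized query (the DSN and credentials are illustrative), which closes the hole regardless of what mod_security happens to catch:

        <?php
        // Sketch only: connection details are illustrative.
        $pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8', $dbUser, $dbPass);
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        // The user-supplied value travels as a bound parameter, never as SQL text.
        $stmt = $pdo->prepare('SELECT * FROM profile WHERE profile_id = ?');
        $stmt->execute(array($_REQUEST['profile_id']));
        $user = $stmt->fetch(PDO::FETCH_ASSOC);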

  • SQL SERVER – Get 2 of My Books FREE at Koenig Tech Day – Where Technologies Converge!

    - by pinaldave
    As a regular reader of my blog, you must be aware that I love to write books and talk about the various subjects they cover. The founders of Koenig Solutions are very old friends of mine; I have known them for many years, and they have been the biggest supporters of my books. This coming weekend they have a technology event at their Bangalore location. Every attendee of the technology event will get a set of two books worth Rs. 450: 'SQL Server Interview Questions And Answers' and 'SQL Wait Stats Joes 2 Pros'. I am going to cover a couple of topics from the books and present as well. I am very confident that every attendee will have a great time.

    I will be covering the following subject: SQL Server Tricks and Tips for Blazing Fast Performance. Slow-running queries are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how SQL Server has been set up. The session will focus on ways of identifying problems that slow down SQL Server, and tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Once the session is over, I will point to the exact locations in the book where you can continue the further learning. I am pretty excited; this is more like a book reading, but in an entirely different format.

    The one-day event will cover four technologies in four separate interactive sessions: Microsoft SQL Server, Security, VMware/Virtualization, and ASP.NET MVC.

    Date of the event: Dec 15, 2012, 9 AM to 6 PM.

    Location of the event: Koenig Solutions Ltd., # 47, 4th Block, 100 feet Road, 3rd Floor, Opp to Shanthi Sagar, Koramangala, Bangalore - 560034. Mobile: 09008096122. Office: 080-41127140.

    The organizers have informed me that there are very limited seats for this event, and the technical session based on my book will start at 9 AM sharp. If you show up late, there is a chance you will not get a seat. Registration for the event is a MUST. Please visit this link for further information.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

  • Oracle Tutor: *** CAUTION to Word .docx Users ***

    - by [email protected]
    Microsoft released security update KB969604 for Office 2007 (around June 2009). This update causes document variables within Word .docx files to be scrambled, and it might still be pushed out via Office 2007 updates. DO NOT save files as .docx using MS Office 2007 until you apply MS hotfix #970942, available here.

    If you are using Windows XP with Office 2003 or Office 2000 and have installed an older Office 2007 compatibility pack, documents saved as .docx may also end up with scrambled document variables. Installing the 2007 compatibility pack published on 1/6/2010 (version 4) will prevent the document variables from becoming corrupt. Those on Windows 2000 may not be able to install the latest compatibility pack, or the compatibility pack may not function properly. This situation will hopefully be rectified in the coming months.

    What is a document variable? Document variables store data inside the document, invisible to the user. The Tutor software uses them when converting the document to HTML and when creating the flowchart, just to name a couple of uses.

    How will you know if a document's variables are scrambled? The difficulty in diagnosing the issue is that the symptoms can take myriad forms. There isn't a single error message or a single feature that one can point to and say, "test for the problem by doing this." The best clue about the error is seeing any kind of string in an error message that has garbage characters, question marks, XML code snippets, or just nonsense, such as "Language ?????????????xlr;lwlerkjl could not be found." It is also possible to see the corrupted data in the footers of the Word docs, and just because the footers look correct does not mean that the document variables are not corrupted. The corruption does not occur in every document variable in the document, just some of them; often it is less than a quarter of them. (A small macro for eyeballing the variables is sketched at the end of this note.)

    What is the difference between docx files and doc files? Office 2007 uses Office Open XML formats with .docx and .docm filename extensions: .docx is an Office Open XML word document, and .docm is a macro-enabled Office Open XML document. This means the file structure behind the scenes is quite different from the binary file formats used prior to Office 2007, such as .doc, .dot, .xls, and .ppt.

    Solution summary:

    - For Windows XP and Word 2007: install the hotfix, or save files as *.doc
    - For Windows XP and Word 2000 or 2003: install the latest compatibility pack, or save files as *.doc
    - For Windows 2000 with Word 2000 or 2003: do not use any compatibility pack; save files as *.doc

    Emily Chorba, Principal Product Manager for Oracle Tutor
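    Postscript: one quick way to eyeball the variables is a small macro run from Word's VBA editor; this is an illustrative sketch for inspection only:

        ' List each document variable so garbled values are visible
        ' (output goes to the Immediate window in the VBA editor).
        Sub ListDocumentVariables()
            Dim v As Variable
            For Each v In ActiveDocument.Variables
                Debug.Print v.Name & " = " & v.Value
            Next v
        End Sub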

  • Setting up Ubuntu Server as a Router with DHCPD and 3 Ethernet devices

    - by cengbrecht
    My configuration:

    - Ubuntu 12.04
    - DHCP3-server
    - eth0, eth1, eth2 (Edit: removed br0 & br1)

    eth0 is the external connection; eth1 & eth2 are the internal network. eth1 and eth2 are supposed to be separate networks for students and teachers respectively. What I would like to have is the internet from the external device bridged to devices 1 and 2, with the DHCP server controlling the two internal devices. It's already working with DHCP; the part I am stuck on is bridging for internet. I have set up a script that I found here: Router, with the original script he linked here: Ubuntu Router Guide.

        echo -e "\n\nLoading simple rc.firewall-iptables version $FWVER..\n"
        IPTABLES=/sbin/iptables
        #IPTABLES=/usr/local/sbin/iptables
        DEPMOD=/sbin/depmod
        MODPROBE=/sbin/modprobe

        EXTIF="eth0"
        INTIF="eth1"
        INTIF2="eth2"
        echo "   External Interface:  $EXTIF"
        echo "   Internal Interface:  $INTIF"
        echo "   Internal Interface:  $INTIF2"

        EXTIP=`ifconfig $EXTIF | grep 'inet addr:' | sed 's#.*inet addr\:\([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\).*#\1#g'`
        echo "   External IP: $EXTIP"

        #======================================================================
        #== No editing beyond this line is required for initial MASQ testing ==

    The rest of the script below this is as-is. I can get an IP from the eth1 & eth2 devices, and my computer can see them, and they it; however, internet is not being passed through. If you need more information please just let me know.

    EDIT: So I had a 255.255.254.0 network; I believe that was causing the issue. Not sure if it will matter on the second card, I will test later. After changing the subnet to 255.255.255.0, pings will pass through; however, I cannot get DNS requests to pass. My new config for the firewall rules:

        # /etc/iptables.up.rules
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *mangle
        :PREROUTING ACCEPT [39:4283]
        :INPUT ACCEPT [39:4283]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [12:4884]
        :POSTROUTING ACCEPT [13:5145]
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *filter
        :FORWARD ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A FORWARD -j LOG
        -A FORWARD -m state -i eth1 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth2 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth0 -o eth1 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        -A FORWARD -m state -i eth0 -o eth2 --state NEW,ESTABLISHED,RELATED -j ACCEPT
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012
        # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
        *nat
        :INPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A POSTROUTING -o eth0 -j MASQUERADE
        -A POSTROUTING -o eth0 -j SNAT --to-source 192.168.1.25
        COMMIT
        # Completed on Wed Nov 28 19:43:28 2012

    Not sure what else you may need, but I am using Webmin to control the server (needed so the operators on site know how to use it). If you could explain it as standard CLI commands, or edits to this file directly, then we should be OK. :) And thanks again Erik, I do believe your edits did help.
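    EDIT 2: For anyone else debugging the same thing, two checks that separate a routing problem from a resolver problem (the addresses are illustrative):

        # On the router: confirm the kernel is actually forwarding
        sysctl net.ipv4.ip_forward        # should report 1

        # From a client on eth1/eth2: query an explicit DNS server. If this
        # works while normal lookups fail, routing is fine and the resolver
        # address handed out by DHCP is the thing to look at.
        dig @8.8.8.8 example.com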

    Read the article

  • ArchBeat Link-o-Rama for November 8, 2012

    - by Bob Rhubart
    Webcast: Meeting Customer Expectations in the New Age of Retail
    Keep your eye on this live webcast as Sanjeev Sharma (Principal Product Director, Oracle Exalogic), Kelly Goetsch (Senior Principal Product Manager, Oracle Commerce), and Dan Conway (Senior Product Manager, Oracle Retail) offer real-world examples of business value derived by running customer-facing applications on Oracle Engineered Systems. Live, Thursday Nov 8, 10am PT / 1pm ET.
    Solving Big Problems in Our 21st Century Information Society | Irving Wladawsky-Berger
    "I believe that the kind of extensive collaboration between the private sector, academia and government represented by the Internet revolution will be the way we will generally tackle big problems in the 21st century. Just as with the Internet, governments have a major role to play as the catalyst for many of the big projects that the private sector will then take forward and exploit. The need for high bandwidth, robust national broadband infrastructures is but one such example." — Irving Wladawsky-Berger
    SOA Still Not Dead: Ratification of Governance Standard Highlights SOA's Continued Relevance
    So just about the time I dig into Google Trends to learn that the conversation about governance peaked in 2004, along comes this InfoQ article by Richard Seroter. And of course you've already listened to the OTN ArchBeat Podcast about governance, right? Right?
    Implications of Java 6 End of Public Updates for Oracle E-Business Suite Users | Steven Chan
    The short version is: "Nothing will change for EBS users after February 2013." According to Steven Chan, "EBS users will continue to receive critical bug fixes and security fixes as well as general maintenance for Java SE 6." You'll find additional information on Steven's blog.
    ADF Mobile Custom Javascript – iFrame Injection | John Brunswick
    The ADF Mobile Framework provides a range of out-of-the-box components to add within your AMX pages, according to John Brunswick. But what happens when "an out of the box component does not directly fulfill your development need? What options are available to extend your application interface?" John has an answer.
    How Data and BPM are married to get the right information to the right people at the right time | Leon Smiers
    "Business Process Management…supports a large group of stakeholders within an organization, all with different needs," says Oracle ACE Leon Smiers. "End-to-end processes typically run across departments, stakeholders and applications, and can often have a long life-span. So how do organizations provide all stakeholders with the information they need?" Leon provides answers in this post.
    Thought for the Day
    "(When) asking skilled architects…what they do when confronted with highly complex problems…(they) would most likely answer, 'Just use Common Sense.' (A) better expression than 'common sense' is 'contextual sense' — a knowledge of what is reasonable within a given context. Practicing architects, through education, experience and examples, accumulate a considerable body of contextual sense by the time they're entrusted with solving a system-level problem…" — Eberhardt Rechtin (January 16, 1926 – April 14, 2006)
    Source: SoftwareQuotes.com

    Read the article

  • What do you think of the EntLib 5.0 configuration tool?

    Hello again! It's been a while, I know. I've been busy over the last few months with several projects, some of them software related, and one of them human: my son Jesse was born on 26 February 2010. Fun times! Meanwhile, back in Redmond, the p&p team has been busy working on Enterprise Library 5.0; see Grigori's announcement for details on the beta. There's a ton of new stuff in this release, but there's one big new feature that hasn't received a lot of attention that I'm keen to hear your perspectives on. The change is the biggest overhaul to the configuration tool since Enterprise Library was launched. If you haven't yet grabbed the EntLib 5.0 beta, here's a before and after shot of the config tool: Enterprise Library 4.1 config tool; Enterprise Library 5.0 (beta 1) config tool. The tool has been rebuilt from the ground up in response to feedback and usability studies on the previous version of the tool. But is this a step in the right direction? I'd love to hear what you think. If you've downloaded EntLib 5.0 and tried out the tool, please share your thoughts on:
    First impressions. Is the tool easy to understand? Easy to find what you're looking for? Easy to read existing configuration? Pretty?
    Ease of use for real-life tasks. Rather than make up your own tasks, here are a few sample scenarios you might want to try:
    - Configure the data access block with a SQL Server connection called Audit that points to a database called Audit on a server called DB.
    - Configure the logging block so that any log entries in the Audit category are written to both the Event Log and the Audit database (see above).
    - Configure the validation block with a ruleset called Email Address that uses an appropriate regular expression for e-mail addresses.
    - Configure the policy injection block such that any calls to classes in the MyCompany.Security namespace are logged before and after the call using the Audit category (see above).
    Comparison with the old config tool. What do you like better in the new tool? What did you like better in the old tool? How do you rate your level of expertise using the old tool?
    Keep in mind that I no longer work in the p&p team, so I can't say how any of this feedback will be used (although I'm sure the team is listening!). However, since I've invested so much time in Enterprise Library, both in leading the team and using the product on real projects, I'm very interested to hear what you all think of the tool's new direction.

    Read the article

  • How do you exclude yourself from Google Analytics on your website using cookies?

    - by Cold Hawaiian
    I'm trying to set up an exclusion filter with a browser cookie, so that my own visits to my site don't show up in my Google Analytics. I tried 3 different methods and none of them has worked so far. I would like help understanding what I am doing wrong and how I can fix this.

    Method 1
    First, I tried following Google's instructions, http://www.google.com/support/analytics/bin/answer.py?hl=en&answer=55481, for excluding traffic by cookie content: create a new page on your domain, containing the following code:

    <body onLoad="javascript:pageTracker._setVar('test_value');">

    Method 2
    Next, when that didn't work, I googled around and found this Google thread, http://www.google.com/support/forum/p/Google%20Analytics/thread?tid=4741f1499823fcd5&hl=en, where the most popular answer says to use a slightly different code. SHS Analytics wrote:

    <body onLoad="javascript:_gaq.push(['_setVar','test_value']);">

    "Thank you! This has now set a __utmv cookie containing 'test_value', whereas the original pageTracker._setVar('test_value') (which Google is still recommending) did not manage to do that for me (in Mac Safari 5 and Firefox 3.6.8)."

    So I tried this code, but it didn't work for me.

    Method 3
    Finally, I searched Stack Overflow and came across this thread, http://stackoverflow.com/questions/3495270/exclude-my-traffic-from-google-analytics-using-cookie-with-subdomain, which suggests that the following code might work:

    <script type="text/javascript">
    var _gaq = _gaq || [];
    _gaq.push(['_setVar', 'exclude_me']);
    _gaq.push(['_setAccount', 'UA-xxxxxxxx-x']);
    _gaq.push(['_trackPageview']);
    // etc...
    </script>

    This script appeared in the head element in the example, instead of in the onload event of the body like in the previous 2 examples. So I tried this too, but still had no luck excluding myself from Google Analytics.

    To re-iterate the question: I tried all 3 methods above with no success. Am I doing something wrong? How can I exclude myself from my Google Analytics using an exclusion cookie for my browser?

    Update: I've been testing this for several days now, and I've confirmed that the 2nd method of excluding yourself from tracking does indeed work. The problem was that the filter settings weren't properly applied to my profile, which has been corrected. See the accepted answer below.
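    For anyone debugging the same setup, a quick sanity check is to confirm the __utmv cookie actually exists before blaming the filter. A hedged sketch, runnable from the browser's JavaScript console on the tracked site (__utmv is the cookie the classic ga.js _setVar mechanism writes):

    // Print any __utmv cookie; its value should contain your marker (e.g. "exclude_me").
    var utmv = document.cookie.split('; ').filter(function (c) {
        return c.indexOf('__utmv') === 0;
    });
    console.log(utmv.length ? utmv : 'No __utmv cookie set; _setVar did not run');

    If the cookie is present with the right value but visits still appear, the problem is almost certainly on the filter-configuration side, which matches the update above.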

    Read the article

  • XNA - Moving Background Calculations

    - by Jesse Emond
    Hi, my question is relatively hard to explain (for me, at least), so I'll go one step at a time; just tell me in the comments if it's not clear enough. I'm making a "Defend Your Castle" type 2D game, where two players own a castle and create units that move horizontally to try to destroy the opponent's base. Here's a screenshot of the game. The distance between both castles is much bigger in a real game, though: bigger than the screen's width, actually.
    Because the distance is bigger than the screen's width, I had to implement a simple 2D camera: Camera2D, which only holds a Location Vector2 (and I always make sure this camera stays within the field area). I then move all the game elements (castles, units, health bars) by that location, so that if a unit is at (5, 0) and the camera's location is (5, 0), the unit's position is moved 5 units to the left, making it (0, 0) on the screen.
    At first, I simply used a static background with mountains and clouds (yeah, those are supposed to be mountains and clouds). Obviously, this looked awful: when you moved the camera, the background would stay immobile. Instead, I'd like to make a moving, "scrolling" background. But rather than making a background with the same width as the distance between the castles, I'd like to make one that is a little bit smaller (but still bigger than the screen's width). I thought this would create an effect of distance with the background (but it might just look awful, too). Here's the background I'm testing with.
    I tried different ways, but none of them seems to work. I tried this:

    float backgroundFieldRatio = BackgroundTexture.Width / fieldWidth; // find the ratio between the background and the field
    float backgroundPositionX = -cam.Location.X * backgroundFieldRatio; // move the background to the left

    When I run this with fieldWidth = 1600 and BackgroundTexture.Width = 1500, while looking at the rightmost area, the background is offset to the left by too big an amount, and we can see the black clear color behind it. I hope I explained properly what I'm trying to achieve. Thank you for your time.
    Note: I didn't know what to look for on Google, so I thought I'd ask here.
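    For what it's worth, the usual fix for the black gap described above is to map the camera's scrollable range to the background's scrollable range, rather than comparing the two full widths. A hedged C# sketch in the post's own terms (viewportWidth, the screen width in world units, is an assumed variable; the other names follow the code above):

    // How far each layer can actually scroll before its right edge comes on screen:
    float maxCameraX = fieldWidth - viewportWidth;
    float maxBackgroundX = BackgroundTexture.Width - viewportWidth;

    // Ratio of scrollable ranges (< 1, so the background moves slower than the camera).
    float backgroundFieldRatio = maxBackgroundX / maxCameraX;

    // At cam.Location.X == 0 the background's left edge lines up with the screen;
    // at cam.Location.X == maxCameraX its right edge does, so no clear color shows.
    float backgroundPositionX = -cam.Location.X * backgroundFieldRatio;

    With fieldWidth = 1600, BackgroundTexture.Width = 1500, and a 800-wide viewport, this gives a ratio of 700/800 = 0.875 instead of 1500/1600 = 0.9375, which is exactly the difference that was exposing the clear color at the right edge.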

    Read the article

  • Cannot install packages. "Warning: untrusted versions..." plus "method driver /usr/lib/apt/methods/http could not be found"

    - by Steve Tjoa
    Judging from Internet forums, these errors appear to be common when attempting to install packages:

    steve:~$ sudo aptitude install examplepackage
    The following NEW packages will be installed:
      examplepackage examplepackage-common{a}
    0 packages upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
    Need to get 1,834 kB of archives. After unpacking 7,631 kB will be used.
    Do you want to continue? [Y/n/?]
    WARNING: untrusted versions of the following packages will be installed!
    Untrusted packages could compromise your system's security. You should only proceed with the installation if you are certain that this is what you want to do.
      examplepackage examplepackage-common
    Do you want to ignore this warning and proceed anyway?
    To continue, enter "Yes"; to abort, enter "No": Yes
    E: The method driver /usr/lib/apt/methods/http could not be found.
    E: The method driver /usr/lib/apt/methods/http could not be found.
    E: Internal error: couldn't generate list of packages to download

    I followed this post by uninstalling ubuntu-keyring. But I cannot reinstall ubuntu-keyring or ubuntu-minimal; the above errors reappear. In fact, I don't even seem to have apt (I must have caused this along the way by trying a bad solution, or maybe a clean):

    steve:~$ sudo apt-get update
    sudo: apt-get: command not found

    Aptitude works, but I can't install apt:

    steve:~$ sudo aptitude install apt
    The following NEW packages will be installed:
      apt
    0 packages upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 1,046 kB of archives. After unpacking 3,441 kB will be used.
    E: The method driver /usr/lib/apt/methods/http could not be found.
    E: The method driver /usr/lib/apt/methods/http could not be found.
    E: Internal error: couldn't generate list of packages to download

    ...or update:

    steve:~$ sudo aptitude update
    E: The method driver /usr/lib/apt/methods/http could not be found.
    E: The method driver /usr/lib/apt/methods/http could not be found.
    E: The method driver /usr/lib/apt/methods/http could not be found.

    I tried this post. It didn't help. To summarize, the main problem is that I cannot install anything; the other errors mentioned above occurred while I was attempting to fix that. Can you help me fix this error? Feel free to ask if you need more information. Stats:

    steve:~$ lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 11.10
    Release:        11.10
    Codename:       oneiric
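    For readers hitting the same wall: when apt itself is broken or missing, dpkg can still install packages directly, since it does not use apt's http method. A hedged recovery sketch; the filename below is a placeholder, so browse the mirror's pool listing for the exact apt .deb matching 11.10 (oneiric) and your architecture:

    # Download the apt package with a tool that doesn't depend on apt
    # (substitute the real version and architecture from the mirror listing):
    wget http://archive.ubuntu.com/ubuntu/pool/main/a/apt/apt_VERSION_ARCH.deb
    sudo dpkg -i apt_VERSION_ARCH.deb
    # Then put the keyring back and refresh the package lists:
    sudo apt-get install --reinstall ubuntu-keyring
    sudo apt-get update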

    Read the article

< Previous Page | 682 683 684 685 686 687 688 689 690 691 692 693  | Next Page >