Search Results


  • IRM and Consumerization

    - by martin.abrahams
    As the season of rampant consumerism draws to its official close on Twelfth Night, it seems a fitting time to discuss consumerization - whereby technologies from the consumer market, such as Android devices and the iPad, are adopted by business organizations. I expect many of you received a shiny new mobile gadget for Christmas - and will be expecting to use it for work as well as leisure in 2011. In my case, I'm just getting to grips with my first Android phone. This trend developed so much during 2010 that a number of my customers have officially changed their stance on consumer devices - accepting consumerization as something to embrace rather than resist.

    Clearly, consumerization has significant implications for information control, as corporate data is distributed to consumer devices whether the organization is aware of it or not. I daresay that some DLP solutions can limit distribution to some extent, but this creates a conflict between accepting consumerization and frustrating it.

    So what does Oracle IRM have to offer the consumerized enterprise? First and foremost, consumerization does not automatically represent great additional risk - if an enterprise seals its sensitive information. Sealed files are encrypted, and that fundamental protection is not affected by copying files to consumer devices. A device might be lost or stolen, and the user might not think to report the loss of a personally owned device, but the data and the enterprise that owns it are protected. Indeed, the consumerization trend is another strong reason for enterprises to deploy IRM - to protect against this expansion of channels by which data might be accidentally exposed. It also enables encryption requirements to be met even though the enterprise does not own the device and cannot enforce device encryption.

    Moving on to the usage of sealed content on such devices, some of our customers are using virtual desktop solutions such that, in truth, the sealed content is being opened and used on a PC in the normal way, and the user is simply using their device for display purposes. This has several advantages:

    - The sensitive documents are not actually on the devices, so device loss and theft are even less of a worry.
    - The enterprise has another layer of control over how and where content is used, as access to the virtual solution involves another layer of authentication and authorization - defence in depth.
    - It is a generic solution, so the enterprise does not need to actively support the ever-expanding variety of consumer devices - it just manages virtual access to traditional systems using something like Citrix or Remote Desktop services.
    - It is a tried and tested way of accessing sealed documents - people have been using Oracle IRM in conjunction with Citrix and Remote Desktop for several years.

    For some scenarios, we also have the "IRM wrapper" option that provides a simple app for sealing and unsealing content on a range of operating systems. We are busy working on other ways to support the explosion of consumer devices, but this blog is not a proper forum for talking about them at this time. If you are an Oracle IRM customer, we will be pleased to discuss our plans and your requirements with you directly on request. You can be sure that the blog will cover the new capabilities as soon as possible.

  • Wordpress Installation (on IIS and SQL Server)

    - by Davide Mauri
    To install WordPress on SQL Server and IIS, first work through the following steps:

    - Create a database on SQL Server that will be used by WordPress.
    - Create a login that can access the newly created database, and add the user to the db_ddladmin, db_datareader and db_datawriter roles.
    - Download and unpack the WordPress 3.3.2 (latest version as of 27 May 2012) zip file into a directory of your choice.
    - Download the wp-db-abstraction 1.1.4 (latest version as of 27 May 2012) plugin from the wordpress.org website.

    Now that the basics are done, you can start to set up and configure your WordPress installation. Unpack and follow the instructions in the README.TXT file to install the Database Abstraction Layer. Mainly you have to:

    - Upload wp-db-abstraction.php and the wp-db-abstraction directory to wp-content/mu-plugins. This should be parallel to your regular plugins directory. If the mu-plugins directory does not exist, you must create it.
    - Put the db.php file from inside the wp-db-abstraction directory into wp-content/db.php.

    Now create an application pool in IIS, then create a website, using that application pool, that points to the folder where you unpacked the WordPress files. Be sure to give the "Write" permission to the IIS account, as pointed out in this (old, but still quite valid) installation manual: http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#iis

    Now you're ready to go. Point your browser to the configured website and the WordPress installation screen will be there for you. When you're asked for the information to connect to a MySQL database, simply skip that page, leaving the default values. If you have installed the Database Abstraction Layer, another database installation screen will appear after the MySQL one, and here you can enter the configuration information needed to connect to SQL Server. After finishing the installation steps, you should be able to access and navigate your WordPress site.

    A final touch, and it's done: just add the needed rewrite rules (http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#urlrewrite) and that's it! Well, not really. Unfortunately the current (as of 27 May 2012) version of the Database Abstraction Layer (1.1.4) has some bugs. Luckily they can be quickly fixed:

    - Backslash Fix: http://wordpress.org/support/topic/plugin-wp-db-abstraction-fix-problems-with-backslash-usage
    - Select Top 0 Fix: make the change to the file ".\wp-content\mu-plugins\wp-db-abstraction\translations\sqlsrv\translations.php" suggested by "debettap": http://sourceforge.net/tracker/?func=detail&aid=3485384&group_id=315685&atid=1328061

    And now you have a 100% working WordPress installation on SQL Server! Since I also wanted to take advantage of SQL Server Full Text Search, I've created a very simple WordPress plugin to set up full-text search and to use it as the website search engine: http://wpfts.codeplex.com/ Enjoy!
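
    As a footnote: if you prefer to script the first two steps (database, login and roles), a minimal sketch using Invoke-Sqlcmd follows. The database name, login name and password here are placeholders chosen for illustration, not values required by WordPress:

      # Requires the SQL Server cmdlets (e.g. Add-PSSnapin SqlServerCmdletSnapin100 on 2008-era installs).
      # "WordPressDB", "wp_user" and the password are placeholder values - substitute your own.
      Invoke-Sqlcmd -ServerInstance "localhost" -Query "CREATE DATABASE WordPressDB;"
      Invoke-Sqlcmd -ServerInstance "localhost" -Query "CREATE LOGIN wp_user WITH PASSWORD = 'ChangeMe!123';"
      # Create the database user and grant the three roles the abstraction layer needs.
      Invoke-Sqlcmd -ServerInstance "localhost" -Database "WordPressDB" -Query "
        CREATE USER wp_user FOR LOGIN wp_user;
        EXEC sp_addrolemember 'db_ddladmin',   'wp_user';
        EXEC sp_addrolemember 'db_datareader', 'wp_user';
        EXEC sp_addrolemember 'db_datawriter', 'wp_user';"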

  • Using Movemail with Thunderbird on Ubuntu

    - by rxt
    I'm trying to read local mail with Thunderbird on Ubuntu (with both 12.04 and 13.04). I've followed the instructions found here: How can I access system mail in /var/mail/ via thunderbird? I can read mail on the system using alpine or vim, so I know the mailbox is not empty. When I click the get-mail button, nothing happens. I see no Inbox (or any folder structure) for the specific account. I've set the rights for /var/mail to 1777.

    Settings:
    - server name: localhost
    - username: john

    How can I get this working?

    Okay, considering the extra bounty, I would like to get this working like normal mail. The accepted answer from Qasim resulted in a much more usable situation than before - opening mail in Thunderbird with layout. I still face three problems though:

    1. When new mail is received in the mailbox, Thunderbird won't see it until I restart Thunderbird.
    2. When Thunderbird is restarted, all mail is reset to unread and deletions are undone. This is probably because Thunderbird reads the mail from the /var/mail/www-data file but doesn't update this file, so after restarting it simply reads the file again - the new mail and all the old mail.
    3. This is probably a postfix issue: mail is sent out to existing mail addresses, but cannot be delivered because the receiving mailserver cannot be reached. This results in "Undelivered mail returned to sender". Only one mailserver can be reached: localhost. Because this is a test system, I don't want real customers to receive mail, so I've blocked mail ports in UFW to be sure. When opening the returned mail, I can scroll down and then I see the original mail with proper layout.

    So I can read the mail and see whether the proper images are included, and for me that's workable. Having to restart Thunderbird to read new mail - I know when new mail arrives, so I know when to restart. Having old mail restored after a restart - not a big problem either; I can delete the mail file if it gets too big. I know how it works, but it would be nice if it worked like normal mail.

  • Opening cursor files in a graphics editor?

    - by sdaau
    I'm looking at /usr/share/icons/DMZ-White/cursors, and there is:

      $ tree -s /usr/share/icons/DMZ-White/
      /usr/share/icons/DMZ-White/
      +-- [ 4096] cursors
      |   +-- [   14] 00008160000006810000408080010102 -> v_double_arrow
      |   ...
      |   +-- [    5] 9d800788f1b08800ae810202380a0822 -> hand2
      |   +-- [    8] arrow -> left_ptr
      |   +-- [15776] bd_double_arrow
      |   +-- [15776] bottom_left_corner
      |   +-- [15776] bottom_right_corner
      |   +-- [15776] bottom_side
      ...

    ... a bunch of files without extensions that GIMP cannot open. Is there an editor where these files can be opened - or at least a converter to something like .png? I can note that ImageMagick's display also failed to open these files...

    I also found Gursor Maker - a Cursor Editor for X11/GTK+ - and got the CVS code from SourceForge. It still uses Numeric (the old name of numpy), so to run it you'll have to change:

      #from Numeric import *
      from numpy import *

    ... in xcurio.py, curxp.py, gimp.py and colorfunc.py - and comment out the "from xml.dom.ext.reader import Sax2" line in lsproj.py. With that, I got it running on 11.04... but I cannot get any files to open. So I thought I should grep for paths; nothing much came up - and when I looked into cursordefs.py, I simply had to paste this:

      CURSOR_ICON = gtk.gdk.pixbuf_new_from_xpm_data([
      "10 16 3 1",
      "  c None",
      ". c #000000",
      "+ c #FFFFFF",
      "..        ",
      ".+.       ",
      ".++.      ",
      ".+++.     ",
      ".++++.    ",
      ".+++++.   ",
      ".++++++.  ",
      ".+++++++. ",
      ".++++++++.",
      ".+++++....",
      ".++.++.   ",
      ".+. .++.  ",
      "..  .++.  ",
      "    .++.  ",
      "    .++.  ",
      "     ..   "])

    Heh :) In any case, it doesn't look like it will be much usable on newer Ubuntus, unfortunately...

    I just tested the XMC plugin as well - on 11.04 it has to be built from source (from the link in the accepted answer); the requirements on my system resolved to:

      sudo apt-get install libgimp2.0-dev libglib2.0-0-dbg libglib2.0-0-refdbg libglib2.0-cil-dev libgtk2.0-0-dbg libgtk2.0-cil-dev

    ... after which the configure/make procedure in the INSTALL file works. Note that this plugin is a bit "sneaky": you should use "All files" in the open dialog (as the cursor files have no extensions), and cursor previews will not be rendered at first. Open one cursor file; after it has been opened, there is a preview in the File/Open dialog - but other than that, it works fine...

  • Stop YouTube Videos from Automatically Playing in Chrome

    - by The Geek
    If you've used the internet before, you've probably come across a page with an auto-playing YouTube clip, and chances are good it was a rather annoying one. Here's how to stop them from starting automatically in Chrome. We've already told you how to stop them from automatically playing if you're a Firefox user (best answer: use Flashblock!), but now it's time for Chrome users to get their turn.

    Use the Stop Autoplay for YouTube Extension

    The great thing about this extension is that it stops the video from playing but allows it to continue buffering, so when you do feel like playing the video, it'll already be downloaded - really useful for people with slower internet connections. There's no UI or anything fancy; just head to the extension page and click the Install button. If you want to get rid of it later, use the Tools -> Extensions menu (or type chrome://extensions/ into your address bar), and then click the Uninstall link for that add-on.

    Download Stop Autoplay for YouTube [Google Chrome Extensions]

    Using FlashBlock for Chrome

    If you really wanted to, you could just disable Flash across the board using FlashBlock for Chrome. Once you've installed the extension, you won't see any Flash elements anywhere, and you'll have to move your mouse over them and click to enable them each time. When I installed the extension the first time, I noticed that YouTube was already in the allow list. I'm not sure if that's the default setting or not, but you can use the icon in the address bar, or the Options from the Extensions panel, to get to the settings page, and from there you can remove anything from the White List that you don't want. Another nice feature of FlashBlock is that it can also block Silverlight, or you could simply uninstall or remove unnecessary Chrome plug-ins.

    Download FlashBlock for Chrome

  • Diagram to show code responsibility

    - by Mike Samuel
    Does anyone know how to visually diagram the way the flow of control in code passes between code produced by different groups, and how that affects the amount of code that needs to be carefully written/reviewed/tested for system properties to hold?

    What I am trying to help people visualize are arguments of the form:

    For property P to hold, n_d developers have to write application code, C_a, without certain kinds of errors, and n_m maintainers have to make sure that the code continues to not have these kinds of errors over the project lifetime. We could reduce the error rate by educating the n_d developers and n_m maintainers. For us to be confident that the property holds, n_s specialists still need to test or check |C_a| lines of code and continue to test/check the changes by the n_m maintainers.

    Alternatively, we could be confident that P holds if all code paths that could violate P went through tool code, C_t, written by our specialists.

    In our case:

    - test suites alone cannot give confidence that P holds,
    - n_d >> n_s,
    - n_m >> n_s,
    - |C_a| >> |C_t|,

    so writing and maintaining C_t is economical, frees up our developers to worry about other things, and reduces the ongoing education commitment by our specialists. Or: those conditions do not hold, so focusing on education and testing is preferable.

    Example 1

    As a concrete example, suppose we want to ensure that our web service only produces valid JSON output. Our web service provides several query and mutation operators that can be composed in interesting ways. We could try to educate everyone who maintains those operations about JSON syntax, the importance of conformance, and the libraries available, so that when they write to an output buffer, every possible sequence of appends results in syntactically valid JSON. Alternatively, we don't expose an output stream handle to application code, and instead expose a JSON sink, so that every code path that writes a response is channeled through a JSON sink that is written and maintained by a specialist who knows JSON syntax and can use well-written libraries to produce only valid output.

    Example 2

    We need to make sure that a service that receives a URL from an untrusted source and tries to fetch its content does not end up revealing sensitive files from the file system, like file:///etc/passwd. If there is a single standard way that any developer familiar with the application language's libraries would use to fetch URLs, which has file-system access turned off by default, then simply educating developers about the standard mechanism, and testing that file probing fails for some inputs, will probably be sufficient.
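
    Returning to the general argument: one way to make the break-even comparison concrete is to typeset it. This is my sketch, not part of the original question; the cost constants c_edu (educating one person) and c_rev (checking or maintaining one line) are hypothetical symbols introduced purely for illustration:

      \[
        \underbrace{(n_d + n_m)\,c_{edu} + |C_a|\,c_{rev}}_{\text{educate everyone, check all application code}}
        \;\gg\;
        \underbrace{|C_t|\,c_{rev}}_{\text{check only the tool code}}
        \quad\text{whenever}\quad |C_a| \gg |C_t|,
      \]

    with the education term growing with n_d + n_m, which is why n_d >> n_s and n_m >> n_s also favor the tool-code approach.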

  • How to do geometric projection shadows?

    - by John Murdoch
    I have decided that since my game world is mostly flat I don't need better shadows than geometric projections - at least for now. The only problem is I don't even know how to do those properly - that is, to produce a 4x4 matrix which would render shadows for my objects (that is, I guess, project them onto a horizontal XZ plane). I would like a light source at infinity (e.g., the sun at some point in the sky) and thus parallel projection.

    My current code does something that looks almost right for small flying objects, but it is actually a very crude approximation: it doesn't project the objects onto the ground, but simply moves them there (I think). It also wrongly assumes the sun is always at the zenith (projecting straight down).

      Gdx.gl20.glEnable(GL10.GL_BLEND);
      Gdx.gl20.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
      // shells
      shellTexture.bind();
      shader.begin();
      for (ShellState state : shellStates.values()) {
          transform.set(camera.combined);
          transform.mul(state.transform);
          shader.setUniformMatrix("u_worldView", transform);
          shader.setUniformi("u_texture", 0);
          shellMesh.render(shader, GL10.GL_TRIANGLES);
      }
      shader.end();
      // shadows
      shader.begin();
      for (ShellState state : shellStates.values()) {
          transform.set(camera.combined);
          m4.set(state.transform);
          state.transform.getTranslation(v3);
          // TODO HACK: + 0.5f ensures the shadow appears above the ground; this is
          // overall a hack, as we are just moving the shell to the surface instead
          // of projecting it onto the surface!
          m4.translate(0, -v3.y + 0.5f, 0);
          transform.mul(m4);
          shader.setUniformMatrix("u_worldView", transform);
          shader.setUniformi("u_texture", 0);
          // TODO: make shadow black somehow
          shellMesh.render(shader, GL10.GL_TRIANGLES);
      }
      shader.end();
      Gdx.gl.glDisable(GL10.GL_BLEND);

    So my questions are:

    a) What is the proper way to produce a Matrix4 to pass to OpenGL which would render the shadows for my objects?
    b) I am supposed to use another fragment shader for the shadows which would paint them in semi-transparent grey, correct?
    c) The limitation of this simplistic approach is that whenever there is some object on the ground (so it is not flat), the shadows will not be drawn, correct?
    d) Do I need to add something very small to the y (up) coordinate to avoid z-fighting with ground textures? Or is the fact that they are semi-transparent enough to resolve that problem?
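
    For question (a), here is a sketch of the math (my derivation, not from the original post). Assume column vectors, a ground plane y = 0, and a light direction d = (d_x, d_y, d_z) pointing from the sun toward the ground, so d_y < 0. A point p lands on the plane at p + t*d with t = -p_y / d_y, which is the linear map:

      \[
        M_{\text{shadow}} =
        \begin{pmatrix}
          1 & -d_x/d_y & 0 & 0 \\
          0 & 0        & 0 & \varepsilon \\
          0 & -d_z/d_y & 1 & 0 \\
          0 & 0        & 0 & 1
        \end{pmatrix},
        \qquad
        M_{\text{shadow}}
        \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
        =
        \begin{pmatrix} x - (d_x/d_y)\,y \\ \varepsilon \\ z - (d_z/d_y)\,y \\ 1 \end{pmatrix}.
      \]

    Multiplying each shell's world transform by this matrix (instead of the translate hack) projects the mesh onto the plane; the small epsilon > 0 in the second row lifts the shadow just above the ground, which also answers (d). For a sun at the zenith, d = (0, -1, 0), the matrix reduces to simply zeroing y, which is roughly what the current hack approximates. As for (b): yes, render the flattened mesh with a shader (or color uniform) that outputs semi-transparent black; and (c): yes, a single flat receiver plane is the inherent limitation of this technique.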

  • How to Modify a Signature for Use in Plain Text Emails in Outlook 2013

    - by Lori Kaufman
    If you've created a signature with an image, links, text formatting, or special characters, the signature will not look the same in plain-text formatted emails as it does in HTML format. As the name suggests, plain text does not support any type of formatting. For example, if you include an image in your signature, the plain-text version will be blank. Active links in HTML signatures are converted to just the text of the link in plain-text emails: a How-To Geek link becomes simply How-To Geek and looks like the rest of the text in the signature. Likewise, a picture of an envelope inserted using the Wingdings font will only display as the plain-text character associated with it.

    There are times you may need to send email in plain-text format but still include your signature. You can edit the plain-text version of your signature to make it look good in plain-text emails by manually editing the text file. To do this:

    1. Click the File tab, then click Options in the menu list on the left side of the Account Information screen.
    2. On the Outlook Options dialog box, click Mail in the list of options on the left side of the dialog box.
    3. In the Compose messages section, press and hold the Ctrl key and click the Signatures button. This opens the Signatures folder containing the files used to insert signatures into emails. The .txt file version of each signature is used when inserting a signature into a plain-text email.
    4. Double-click the .txt file for the signature you want to edit to open it in Notepad, or your default text editor. Notice that the links on "How-To Geek" and "Email me" are gone, and the envelope typed using the Wingdings font was converted to an "H".
    5. Edit the text file to remove extra characters, replace images, and provide full web and email links. Save the text file.
    6. Create a new mail message and select the edited signature, if it's not the default signature for the current email account. To convert the email to plain text, click the Format Text tab and click Plain Text in the Format section. The Microsoft Outlook Compatibility Checker displays, telling you that formatted text will become plain text. Click Continue. The HTML version of your signature is converted to the plain-text version.

    NOTE: You should make a backup of the .txt signature file you edited, as this file will change again when you change your signature in the Signature Editor.
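
    As an aside, you can jump to that folder without the Ctrl+click trick. This is a sketch that assumes the default per-user signatures location Outlook uses on Windows; "My Signature.txt" is a placeholder name:

      # Open the Signatures folder (default per-user location):
      Invoke-Item "$env:APPDATA\Microsoft\Signatures"

      # Or open the plain-text variant of a signature directly;
      # "My Signature.txt" is a placeholder - use your signature's file name.
      notepad "$env:APPDATA\Microsoft\Signatures\My Signature.txt"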

  • XNA - Moving Background Calculations

    - by Jesse Emond
    Hi, my question is relatively hard to explain (for me, at least), so I'll go one step at a time - just tell me in the comments if it's not clear enough.

    I'm making a "Defend Your Castle" type 2D game, where two players own a castle and create units that move horizontally to try to destroy the opponent's base. Here's a screenshot of the game. The distance between both castles is much bigger in a real game though - bigger than the screen's width, actually.

    Because the distance is bigger than the screen's width, I had to implement a simple 2D camera: Camera2D, which only holds a Location Vector2 (and I always make sure this camera is within the field area). Then I just move all the game elements (castles, units, health bars) by that location, so that if a unit is at (5, 0) and the camera's location is (5, 0), the unit's position is moved by 5 units to the left, making it (0, 0) on the screen.

    At first, I simply used a static background with mountains and clouds (yeah, those are supposed to be mountains and clouds). Obviously, this looked awful: when you moved the camera, the background would stay immobile. Instead, I'd like to make a moving, "scrolling" background. But rather than making a background with the same width as the distance between the castles, I'd like to make one that is a little bit smaller (but still bigger than the screen's width). I thought this would create an effect of "distance" with the background (but it might just look awful, too). Here's the background I'm testing with.

    I tried different ways, but none of them seems to work. I tried this:

      float backgroundFieldRatio = BackgroundTexture.Width / fieldWidth; // find the ratio between the background and the field
      float backgroundPositionX = -cam.Location.X * backgroundFieldRatio; // move the background to the left

    When I run this with fieldWidth = 1600 and BackgroundTexture.Width = 1500, while looking at the rightmost area, the background is offset to the left by too big an amount, and we can see the black clear color behind it.

    I hope I explained properly what I'm trying to achieve. Thank you for your time.

    Note: I didn't know what to look for on Google, so I thought I'd ask here.
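
    For what it's worth, a sketch of the likely fix (my reasoning, not from the original post): the ratio should compare the scrollable ranges rather than the raw widths. Writing W_screen for the viewport width (a symbol the original code does not define), the camera travels from 0 to W_field - W_screen while the background must travel from 0 to W_bg - W_screen, so:

      \[
        x_{bg} = -\,cam_x \cdot \frac{W_{bg} - W_{screen}}{W_{field} - W_{screen}}
      \]

    With W_field = 1600, W_bg = 1500 and, say, W_screen = 800, the factor becomes 700/800 instead of 1500/1600, and at the rightmost camera position the background's right edge lands exactly on the screen's right edge - no black gap. (If these are integer fields in C#, cast to float before dividing, or the ratio truncates to zero.)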

  • Game Review: God of Light

    Luckily I came across this title at a very early stage. If I remember correctly, I took notice of God of Light on Twitter right on the weekend it was published on the Play Store.

    "Sit back and become immersed into the world of God of Light, the game that rethinks the physics puzzle genre with its unique environment exploration gameplay, amazing graphics and exclusive soundtrack created by electronic music icon UNKLE. Join cute game mascot, Shiny, on his way to saving the universe from the impending darkness. Play through a variety of exciting game worlds and dozens of levels with mind-blowing puzzles. Your goal is to explore game levels, seek for game objects that reflect, split, combine, paint, bend and teleport rays of light energy to activate the Sources of Life and bring light back to the universe."

    Mastering the various reflection items in God of Light is very easy to learn, and new elements are introduced during the game.

    Amazing puzzle game

    Here's the initial review I posted on the Play Store: "Great change in puzzles. Fantastic and refreshing concept of puzzle solving. The effects and the music match very well, putting the player in the right mood to game. Get enlightened and grow your skills until you are a true God of Light."

    And it remains true, even after completing the first realm. Similar to Quell, it took me only a couple of hours during the evening to complete all levels in the available three realms, unfortunately. God of Light currently consists of 75 levels - 25 in each realm, to be precise - and the challenges are increasing. Compared to the iOS version from the AppStore, God of Light is available for free on Android - at least the first realm (25 levels). Unlocking the two remaining realms is done through an in-app purchase.

    The visual appearance, the sound effects and the background music provided by UNKLE make God of Light a superb package for any puzzle gamer. Whether it is simply reflecting light over multiple mirrors, or later on bending the rays of light with black holes, or using prisms to split, enforce, or colourise your beam, God of Light is great fun and offers a good amount of joy.

    Screenshot captions (images not reproduced here): astonishing graphics and visual appeal throughout the game; introduction during the first levels, with new light items introduced at each stage of play; increasing complexity and puzzle fun.

    Hopefully, Playmous is going to provide more astonishing-looking realms and interesting gimmicks in future versions.

    Play Store: God of Light
    Also, check out the latest game updates on the official web site of Playmous

  • SharePoint 2010 Hosting :: Sending SMS Alerts in SharePoint 2010 Over Office Mobile Service Protocol (OMS)

    - by mbridge
    In this post, I want to share exciting news about a new SharePoint 2010 feature: it is finally possible to send SMS directly from SharePoint to mobile phones. The advantages of sending SMS instead of email messages are obvious: SMS alerts or reminders received on mobile phones are harder to miss than email messages, which can be lost in the mass of spam.

    The interface is standard, as it's very similar to previous versions of the product. Adjustments are easy to make: simply enter the address of the Office Mobile Service (OMS) web service which you want to use for sending messages, then specify the connection parameters. Further details on Office Mobile Service are available below. The Test Service button checks whether the OMS web service is accessible at the provided URL (user name and password are not verified). This check is needed because the OMS web-service URL depends on the mobile operator and country.

    It's now possible to select the method of sending alerts in the alert settings. The email option is selected by default. The alert delivery method is displayed in the list of existing alerts.

    Office Mobile Service (OMS)

    SharePoint 2010 uses external servers, similar to SMTP servers, for sending SMS alerts. However, Microsoft started development and promotion of its own protocol instead of using existing ones. That is how Office Mobile Service (OMS) appeared. This open protocol enables clients to send text and multimedia messages (mobile messages) remotely to a server, which processes these messages and delivers them to mobile phones. A typical scenario for this protocol is data transfer between a computer application and a mobile phone. The recipient can answer messages, and the server in return will deliver the answer by SMTP protocol, i.e. by email. A key quality of this protocol is that it is built on top of HTTP(S) and SOAP. This means that, in effect, the SMS gateway must expose a typed web service. What do you get from a web service? The ability to send SMS from any platform you want. The protocol is still being developed; version 0.2 from 08/28/2009 was available when this article was published.

    To promote the protocol and simplify the search for a server, Microsoft provides the page http://messaging.office.microsoft.com/HostingProviders.aspx, which lists the providers that support the OMS protocol and message delivery to your operator. All you need to do is decide which provider to use and complete the agreement, then adjust the SharePoint connection parameters and start working. Some providers advertise themselves not only to clients but to mobile operators as well, offering automatic addition to the list of Office Mobile Service Providers.

    To view the full specification of OMS, please go to http://msdn.microsoft.com/en-us/library/dd774103.aspx.

  • Chargeback and showback...both a 'throw back'

    - by llaszews
    I've been getting asked again by customers and partners about chargeback and showback in the cloud, so I thought I would blog my response to this question.

    Chargeback background, information and industry analysis:

    Cloud computing is all about shared resources. These shared resources are computer servers (including memory and CPU), network devices, hard disk storage, database servers, application servers, cooling, floor space, electricity and more. These resources are shared by departments within a company, or by a number of companies when resources are hosted in the public or hybrid cloud. Currently, hosting providers that run other companies on their cloud platforms do not have an accurate way to measure the shared computing resources used by a specific user, let alone by a specific customer. Additionally, companies running their own cloud data centers, for private or hybrid clouds, have no way of measuring these shared cloud resources and charging them back to the departments that use them. In both cases, the inability to determine shared resource costs and charge them back to the company, department or user consuming the resources prevents a clear measure of business benefit and impacts a company's ability to measure the Return on Investment (ROI).

    An IT chargeback system is an accounting strategy that applies the costs of IT services, hardware or software to the business unit in which they are used. This system contrasts with traditional IT accounting models in which a centralized department bears all of the IT costs in an organization and those costs are treated simply as corporate overhead. Showback involves showing the IT costs to a department or customer but not actually charging them for their IT usage. Showback is a gradual method of introducing chargeback into an enterprise; most companies implement a showback mechanism before a full chargeback system is put in place.

    Oracle chargeback product:

    Oracle Enterprise Manager provides tools for defining detailed chargeback plans spanning the different metrics collected for each type of resource, as well as defining cost centers for grouping costs across multiple developers. Chargeback plans can use not only usage-based costs, but also configuration-based costs (e.g. version of the platform) or fixed costs (e.g. a flat-rate management fee). Chargeback has rich out-of-the-box reports. Trending reports show how charge and resource consumption vary over time, while summary reports show the breakdown of charges or usage by different dimensions such as cost center or target type. These reports help consumers understand how their charges relate to their consumption, and also assist the IT department with budgeting and planning activities. With BI Publisher, the reports can be made available in a variety of formats such as PDF, HTML, Word, Excel or PowerPoint.

  • Encrypting your SQL Server Passwords in Powershell

    - by laerte
    A couple of months ago, a friend of mine who is now bewitched by the seemingly supernatural abilities of Powershell (+1 for the team) asked me what initially appeared to be a trivial question: "Laerte, I do not have the luxury of being able to work with my SQL Servers through Windows Authentication, and I need a way to automatically pass my username and password. How would you suggest I do this?" Given that I knew he, like me, was using the SQLPSX modules (an open source project created by Chad Miller; a fantastic library of reusable functions and PowerShell scripts), I merrily replied, "Simply pass the Username and Password in the SQLPSX functions". He rather pointedly responded: "My friend, I might as well pass Username 'Me', password 'NowEverybodyKnowsMyPassword'".

    As I do have the pleasure of working with Windows Authentication, I had not really thought this situation through yet (and thank goodness I only revealed my temporary ignorance to a friend, so the embarrassment was minimized). After discussing this puzzle with Chad Miller, he showed me some code for saving passwords in SQL Server tables, which he had demo'd in his Powershell ETL session at Tampa SQL Saturday (you can download the scripts from here). The solution seemed to be pretty much ready to go, so I showed it to my Authentication-impoverished friend, only to discover that we were only half-way there: "That's almost what I want, but the details need to be stored in my local txt file, together with the names of the servers that I'll actually use the Powershell scripts on. Something like:

      Server1,UserName,Password
      Server2,UserName,Password"

    I thought about it for just a few milliseconds (Ha! Of course I'm not telling you how long it actually took me; I have to do my own marketing, after all) and the solution was finally ready.

    First, we have to download Library-StringCripto (with many thanks to Steven Hystad), which is composed of two functions: one for encryption and one for decryption, both of which are used to manage the password. If you want to know more about the library, you can see more details in the help functions.

    Next, we have to create a txt file with your encrypted passwords:

      $ServerName = "Server1"
      $UserName = "Login1"
      $Password = "Senha1"
      $PasswordToEncrypt = "YourPassword"
      $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
      $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
      "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

      $ServerName = "Server2"
      $UserName = "Login2"
      $Password = "senha2"
      $PasswordToEncrypt = "YourPassword"
      $UserNameEncrypt = Write-EncryptedString -inputstring $UserName -Password $PasswordToEncrypt
      $PasswordEncrypt = Write-EncryptedString -inputstring $Password -Password $PasswordToEncrypt
      "$($Servername),$($UserNameEncrypt),$($PasswordEncrypt)" | Out-File c:\temp\ServersSecurePassword.txt -Append

    ... and in the c:\temp\ServersSecurePassword.txt file which we've just created, you will find your usernames and passwords, all neatly encrypted, with server names, usernames and passwords separated by commas.

    Decryption is actually much simpler:

      Read-EncryptedString -InputString $EncryptString -password "YourPassword"

    (Just remember that the password you use to decrypt must be exactly the same one used to encrypt.)
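
    A quick round-trip sanity check (my own sketch, using only the two library functions described above):

      # Encrypt a value, then decrypt it with the same key - this should print "MySecret".
      $cipher = Write-EncryptedString -inputstring "MySecret" -Password "YourPassword"
      Read-EncryptedString -InputString $cipher -password "YourPassword"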
    Finally, just to show you how smooth this solution is, let's say I want to use the Invoke-DBMaint function from SQLPSX to perform a CHECKDB on the system databases: it's just a case of split, decrypt and be happy!

      Get-Content c:\temp\ServersSecurePassword.txt | foreach {
          [array] $Split = ($_).split(",")
          Invoke-DBMaint -server $($Split[0]) `
                         -UserName (Read-EncryptedString -InputString $Split[1] -password "YourPassword") `
                         -Password (Read-EncryptedString -InputString $Split[2] -password "YourPassword") `
                         -Databases "SYSTEM" -Action "CHECK_DB" -ReportOn c:\Temp
      }

    This is why I love Powershell.

  • CIO's Corner: Achieving a Balance

    - by Michelle Kimihira
    Author: Rick Beers, Senior Director, Product Management, Oracle Fusion Middleware

    All too often, a CIO is unfairly characterized as either technology-focused or business-focused; as more concerned with either infrastructure performance or business excellence. It seems to me that this completely misses the point. I have long thought that a CIO has probably the most complex C-level position in an enterprise, one that requires an artful balance among four entirely different constituencies, often with competing values and needs. How a CIO balances these is the single largest determinant of success.

    I was reminded of this while reading Maria Bartiromo's excellent interview of Mark Hurd in a recent issue of USA TODAY (Bartiromo: Oracle's Hurd is in tech sweet spot). The interview covers topics such as Big Data, leadership and Oracle's growth strategy. But the topic that really got my interest, and reminded me of the need for balance, was IT spending trends, on which Mark Hurd observed: "...budgets are tight. What most of our customers have today is both an austerity plan to save money and at the same time a plan to reapply that money to innovation. There isn't a customer we have that doesn't have an austerity plan and an innovation plan."

    In an era of economic uncertainty and an accelerating pace of business change, this is probably the toughest balance a CIO must achieve. Yet for far too many IT organizations, operating costs consume over 75% of their budgets, leaving precious little for innovation and investment in business-critical technology programs. I have found that many CIOs are trapped by their enterprise systems platforms, which were originally architected for standardization, compliance and tightly integrated linear workflows. Yes, these traits are still required for specific reasons and cannot be compromised. But they are no longer enough. New demands are emerging: the explosion in the volume and diversity of data, the consumerization of IT, the rise of social media, and the need for continual business process reengineering. These were simply not the design criteria for Enterprise 1.0, and attempting to meet them with current systems platforms results in an escalation in complexity and a resulting increase in operating costs for many IT organizations. This is the cost-versus-investment trap, and it is what most constrains CIOs from achieving the balance they need.

    But there is a way out of this trap. Enterprise 2.0 represents an entirely new enterprise systems architecture, one that is 'business-centric' rather than 'ERP-centric', which defined the architecture of Enterprise 1.0. Oracle's best-in-class suite of Fusion Middleware products enables a layered approach to enterprise systems architectures that provides the balance an enterprise needs. The most exciting part of all this? The bottom two layers are focused upon reducing costs, and the upper two layers provide business value and innovation. Finally, the balance a CIO needs.

    Additional Information: Product Information on Oracle.com: Oracle Fusion Middleware

  • How To Completely Disable Subtitles in VLC

    - by The Geek
    If you watch a lot of videos using VLC, you might have noticed that it enables subtitles by default if they are there, which can be pretty annoying at times. Here's the quick tip to disable them entirely. Of course, you can always turn them back on if you want, on an individual video basis.

    Disable Subtitles

    Head into the VLC preferences, and then click the All button at the bottom of the screen. On the left-hand side, choose Video -> Subtitles/OSD, and then uncheck the boxes for "Autodetect subtitle files", "Enable sub-pictures", and "On Screen Display". That should do it, unless the subtitles are forced in the video for some reason.

    Note: Certain video formats like MKV can have subtitles enabled even though there isn't a separate subtitles file. This is why you need to uncheck "Enable sub-pictures" as well, which totally disables the on-screen text. You can choose to only uncheck the autodetection of subtitles instead, if you'd prefer.

    And of course, you can simply right-click on the video, head to Video -> Subtitles Track, and then choose the subtitles if you still want them. Note: this only works if the "Enable sub-pictures" option is still enabled.

    And thus ends the tale of disabling those fracking subtitles. Starbuck approves.

  • SCCM 2007 Collections per OU

    - by VirtualizeIT
    Recently I wanted to set up our SCCM collections to mirror our Active Directory structure, and I finally figured out how to create collections per OU of the domain. I decided to write a simple tutorial with the steps, which may help other IT professionals complete this task.

    1. Open the ConfigMgr console and navigate to the collections: go to Site Database > Computer Management > Collections.
    2. Right-click in 'Collections' and select New Collection. A wizard will pop up so you can enter the name of the collection and any notes you may want to associate with it.
    3. Next, select the database icon. In the 'Name' textbox enter the name of the query; I named mine 'Query' for simplicity's sake. After you enter the name, select 'Edit Query Statement...'.
    4. Select the 'Criteria' tab.
    5. Select the icon that looks like a sun. A criterion properties dialog box will appear.
    6. Next, click the 'Select' button.
    7. Under 'Attribute class', scroll through until you see 'System Resource', and for 'Attribute', scroll through until you see 'System OU Name'.
    8. After that, select OK.
    9. In the 'Value' textbox, enter the string that is associated with the OU in your domain. NOTE: If you don't know the string name for your OU, go to "Active Directory Users and Computers", right-click on the OU and select Properties. In the 'Object' tab you should see the string under 'Canonical name of object'. That is the string you put in the 'Value' textbox.
    10. After you enter the OU string name, press OK > OK > OK > Next > Next > Finish.

    That's it! I hope this tutorial has helped you understand how to create a collection from your OU structure.

  • Build Dependencies and Silverlight 4

    - by Kyle Burns
    At my current position, I've been doing quite a bit of Silverlight development and have also been working with TFS2010 build services to enable continuous integration. One of the critical pieces of a successful continuous build setup (and also one of the benefits of having one) is that the build system should be able to "get latest" against the source repository and immediately build with no errors. This can break down both in an automated build scenario and a "new guy" scenario when the solution has external dependencies that may not be present in the build environment.

    The method that I use to address the dependency issue is to store all of the binaries upon which my solution depends in a folder under the solution root called "Reference Items". I keep this folder as part of the solution and check all of the binaries into source control, so when I get the latest version of the solution from source control all of the binaries are downloaded to my machine as well. This gets me closer to the ideal where a new developer installs the development IDE, gets latest, and can immediately build and run unit tests before jumping into coding the feature of the day.

    This all sounds pretty good (and it is), but a little while back I ran into one of those little hiccups that requires manual intervention. The issue is that with Silverlight (at least version 4), the behavior of the "Add Reference" command when adding a reference to a DLL that is present in the GAC is to omit the HintPath element that it includes with regular .NET projects. So even if the DLL is sitting in the Reference Items folder and downloaded to the build machine, it cannot be found at compile time and the build will fail.

    To work around this behavior, you need to be comfortable editing the XML project files generated by Visual Studio (in my case this is typically a .csproj file). Simply open the project file in your favorite text editor, find the Reference element that refers to the component, and modify the XML to include the HintPath. Here's a before-and-after example of the component that ultimately led me to the investigation behind this post:

    Before:

      <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL" />

    After:

      <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL">
        <HintPath>..\Reference Items\Telerik.Windows.Controls.dll</HintPath>
      </Reference>

  • MOSS 2007 WSP Retraction 'Error"

    - by juanlarios
    This is a quick post, but I thought I would share this information as I could not find anything that helped me in this specific scenario. Please read the entire article before taking action, as there are some irreversible or very troublesome routes I caution about!

    Problem: I had a client trying to retract a WSP from Central Admin, and it would eventually go to an 'Error' state. I could not retract it, and after looking at the event logs I figured it was a problem with security. I tried several accounts and checked the databases to see if there was some issue with read-only databases; nothing was working.

    Solution: Delete the solution from Central Admin! Yes, I said it. With stsadm, just delete the solution from Central Admin using this command:

      "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN\STSADM.exe" -o deletesolution -name "yoursolution.wsp"

    What has just happened is that Central Admin no longer knows about the WSP, but the feature and any deployed files are still on the server. For whatever reason, SharePoint was not able to retract the files as it normally does. Now you can do one of two things: add the solution again to Central Admin and deploy over the top of the deployed files so it overrides them, or simply clean up the files manually. I re-added the solution through stsadm, then deployed through stsadm using the -force option. This overrides the existing files on the server. If you deploy through Central Admin, it will tell you that you need the -force option, which is not offered as part of the UI in Central Admin. Use the following command:

      "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN\STSADM.exe" -o deploysolution -name "YourSolution.wsp" -immediate -allowgacdeployment -force

    Just to make sure everything was good, I retracted the solution again, and it worked! Then I deleted the solution from Central Admin altogether. I checked the server and noticed all the files that were deployed with the WSP had been cleaned up properly. I then re-added the new WSP the client was looking to install (an updated WSP).

    Conclusion: I have no idea why it was not able to retract - I have seen this several times, and I don't know if it has to do with the security of certain accounts. Although it's annoying at times, it is fairly easy to fix if you have good instructions. Hope it helps you out!

    WORD OF CAUTION: If you clean up the files manually, you might want to uninstall the features through stsadm commands, as SharePoint might still recognize the features that were deployed as part of the WSP. You don't want to get into the mess of deleting files that are still part of activated or installed features. This is why I suggest doing what I did.

  • Export all SSIS packages from msdb using Powershell

    - by jamiet
    Have you ever wanted to dump all the SSIS packages stored in msdb out to files? Of course you have, who wouldn't? Right? Well, at least one person does, because this was the subject of a thread (save all ssis packages to file) on the SSIS forum earlier today. Some of you may have already figured out a way of doing this, but for those that haven't, here is a nifty little script that will do it for you, and it uses our favourite jack-of-all tools ... Powershell!!

    Imagine I have the following package folder structure on my Integration Services server (i.e. in [msdb]): there are two packages in there called "20110111 Chaining Expression components" & "Package", and I want to export those two packages into a folder structure that mirrors that in [msdb]. Here is the Powershell script that will do that:

      Param($SQLInstance = "localhost")

      #####Add all the SQL goodies (including Invoke-Sqlcmd)#####
      add-pssnapin sqlserverprovidersnapin100 -ErrorAction SilentlyContinue
      add-pssnapin sqlservercmdletsnapin100 -ErrorAction SilentlyContinue
      cls

      $Packages = Invoke-Sqlcmd -MaxCharLength 10000000 -ServerInstance $SQLInstance -Query "
      WITH cte AS (
        SELECT cast(foldername as varchar(max)) as folderpath, folderid
        FROM msdb..sysssispackagefolders
        WHERE parentfolderid = '00000000-0000-0000-0000-000000000000'
        UNION ALL
        SELECT cast(c.folderpath + '\' + f.foldername as varchar(max)), f.folderid
        FROM msdb..sysssispackagefolders f
        INNER JOIN cte c ON c.folderid = f.parentfolderid
      )
      SELECT c.folderpath, p.name, CAST(CAST(packagedata AS VARBINARY(MAX)) AS VARCHAR(MAX)) as pkg
      FROM cte c
      INNER JOIN msdb..sysssispackages p ON c.folderid = p.folderid
      WHERE c.folderpath NOT LIKE 'Data Collector%'"

      Foreach ($pkg in $Packages)
      {
          $pkgName = $Pkg.name
          $folderPath = $Pkg.folderpath
          $fullfolderPath = "c:\temp\$folderPath\"
          if(!(test-path -path $fullfolderPath))
          {
              mkdir $fullfolderPath | Out-Null
          }
          $pkg.pkg | Out-File -Force -encoding ascii -FilePath "$fullfolderPath\$pkgName.dtsx"
      }

    To run it, simply change the "localhost" parameter to the server you want to connect to, either by editing the script or passing it in when the script is executed. It will create the folder structure in C:\Temp (which you can also easily change if you so wish - just edit the script accordingly). The folder structure that it created for me is a mirror of the folder structure in [msdb]. Hope this is useful! @Jamiet

  • Backup Azure Tables with the Enzo Backup API

    - by Herve Roggero
    In case you missed it, you can now back up (and restore) Azure Tables and SQL Databases using an API directly. The features available through the API can be found here: http://www.bluesyntax.net/backup20api.aspx, and the online help for the API is here: http://www.bluesyntax.net/EnzoCloudBackup20/APIIntro.aspx. Backing up Azure Tables can't be any easier than with the Enzo Backup API. Here is a sample code that does the trick:

      // Create the backup helper class. The constructor automatically sets the SourceStorageAccount property.
      StorageBackupHelper backup = new StorageBackupHelper(
          "storageaccountname", "storageaccountkey",
          "sourceStorageaccountname", "sourceStorageaccountkey",
          true, "apilicensekey");

      // Now set some properties...
      backup.UseCloudAgent = false;                       // backup locally
      backup.DeviceURI = @"c:\TMP\azuretablebackup.bkp";  // to this file
      backup.Override = true;
      backup.Location = DeviceLocation.LocalFile;

      // Set optional performance options
      backup.PKTableStrategy.Mode = BSC.Backup.API.TableStrategyMode.GUID; // Set GUID strategy by default
      backup.MaxRESTPerSec = 200; // Attempt to stay below 200 REST calls per second

      // Start the backup now...
      string taskId = backup.Backup();

      // Use the Environment class to get the final status of the operation
      EnvironmentHelper env = new EnvironmentHelper("storageaccountname", "storageaccountkey", "apilicensekey");
      string status = env.GetOperationStatus(taskId);

    As you can see above, the code is straightforward. You provide connection settings in the constructor, set a few options indicating where the backup device will be located, set optional performance parameters and start the backup. The performance options are designed to help you back up your Azure Tables quickly, while attempting to stay under a specific threshold to prevent Storage Account throttling. For example, the MaxRESTPerSec property will attempt to keep the overall backup operation under 200 REST calls per second. Another performance option is the Backup Strategy for Azure Tables. By default, all tables are simply scanned. While this works best for smaller Azure Tables, larger tables can use the GUID strategy, which will issue requests against an Azure Table in parallel, assuming the PartitionKey stores GUID values. It doesn't mean that your PartitionKey must contain GUIDs for this strategy to work; but the backup algorithm is tuned for that condition. Other options are available as well, such as filtering which columns, entities or tables are being backed up. Check out more on the Blue Syntax website at http://www.bluesyntax.net.

  • New TPerlRegEx Compatible with Delphi XE

    - by Jan Goyvaerts
    The new RegularExpressionsCore unit in Delphi XE is based on the PerlRegEx unit that I wrote many years ago. Since I donated full rights to a copy rather than full rights to the original, I can continue to make my version of TPerlRegEx available to people using older versions of Delphi. I did make a few changes to the code to modernize it a bit prior to donating a copy to Embarcadero. The latest TPerlRegEx includes those changes. This allows you to use the same regex-based code using the RegularExpressionsCore unit in Delphi XE, and the PerlRegEx unit in Delphi 2010 and earlier.

    If you're writing new code using regular expressions in Delphi 2010 or earlier, I strongly recommend you use the new version of my PerlRegEx unit. If you later migrate your code to Delphi XE, all you have to do is replace PerlRegEx with RegularExpressionsCore in the uses clause of your units.

    If you have code written using an older version of TPerlRegEx that you want to migrate to the latest TPerlRegEx, you'll need to take a few changes into account. The original TPerlRegEx was developed when Borland's goal was to have a component for everything on the component palette. So the old TPerlRegEx derives from TComponent, allowing you to put it on the component palette and drop it on a form. The new TPerlRegEx derives from TObject. It can only be instantiated at runtime. If you want to migrate from an older version of TPerlRegEx to the latest TPerlRegEx, start by removing any TPerlRegEx components you may have placed on forms or data modules and instantiate the objects at runtime instead. When instantiating at runtime, you no longer need to pass an owner component to the Create() constructor. Simply remove the parameter.

    Some of the property and method names in the original TPerlRegEx were a bit unwieldy. These have been renamed in the latest TPerlRegEx. Essentially, in all identifiers SubExpression was replaced with Group and MatchedExpression was replaced with Matched. Here is a complete list of the changed identifiers:

      Old Identifier             New Identifier
      StoreSubExpression         StoreGroups
      NamedSubExpression         NamedGroup
      MatchedExpression          MatchedText
      MatchedExpressionLength    MatchedLength
      MatchedExpressionOffset    MatchedOffset
      SubExpressionCount         GroupCount
      SubExpressions             Groups
      SubExpressionLengths       GroupLengths
      SubExpressionOffsets       GroupOffsets

    Download TPerlRegEx. Source is included under the MPL 1.1 license.

    Read the article

  • Choose Custom New Tab Pages in Chrome

    - by Asian Angel
    For most people the default New Tab Page in Chrome works perfectly well. But if you would prefer to choose what opens in a new tab yourself, then you will definitely want to have a look at the “Define your own new tab!” extension for Google Chrome.

    Before: Unless you are using a Speed Dial (or similar) extension, each time you click on the “New Tab Button” you get the same old page. It would certainly be a lot more satisfying if you could choose custom webpage(s) to open as new tabs.

    After: Once you have the extension installed, the best thing to do is click on the “New Tab Button”. That will open the “Options Page”, where you can enter one or two custom website URLs of your choosing. Once you have your custom URLs entered, click on “Save”. From then on, clicking on the “New Tab Button” opens your custom webpage(s). If you chose two webpages, the first choice will open focused on the right side instead of the left. Clicking on the “Home Button” will also open the webpage(s) that you chose, and they will likewise open as your starting “Home Pages” each time you start your browser.

    Conclusion: If you have wanted to choose your own custom “New Tab Page”, then this is the extension you have been waiting for.

    Links: Download the Define your own new tab! extension (Google Chrome Extensions)

    Read the article

  • Please Stop Voting Against a Candidate

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post about whom I am voting for. This is simply a post to address an issue that I cannot ignore any longer. The two-party system that we have allowed to establish a foothold is killing this country.

    More than 2 Options

    I was recently asked, “If you had to choose Romney or Obama, who would you pick?” I replied, “Non sequitur. The founders of this nation ensured that I never have to pick from only two candidates.” But somehow that is the way this country’s citizens think. I told someone last week that there are around 20 candidates for president, and she was genuinely surprised. (There are actually 25 candidates.) She had no idea there were that many, and even after learning there are more, she didn’t know any names beyond Romney and Obama. Well, I am going to try to educate people like her on other options.

    Vote for a Candidate, not against another Candidate

    So this post is the first in a series with a little bit of information about each candidate for president. I implore you… I beg you, please do your civic duty and conduct a little bit of investigation and research on your own to find the right candidate for you. Hey, if your candidate is Romney or Obama, that’s fine, as long as it’s an educated decision. But please… stop voting against a candidate. Start voting for a candidate.

    A List of Candidates

    As I mentioned, I am going to write a little something about each candidate, going in alphabetical order by PARTY, then by CANDIDATE LAST NAME, so as to not show any bias.

    P.S. – If you want to know the candidate I selected, I am happy to tell you. But that’s not what this series is about.

        PARTY                              CANDIDATE
        America's Party                    Tom Hoefling
        American Third Position Party      Merlin Miller
        Americans Elect Party              No candidates met the requirement to enter into the online caucus.
        Constitution Party                 Virgil Goode
        Democratic Party                   Barack Obama
        Grassroots Party                   Jim Carlson
        Green Party                        Jill Stein
        Independent American Party         Will Christensen
        Justice Party                      Rocky Anderson
        Libertarian Party                  Gary Johnson
        Objectivist Party                  Tom Stevens
        Peace and Freedom Party            Roseanne Barr
        Reform Party                       Andre Barnett
        Republican Party                   Mitt Romney
        Socialism and Liberation Party     Peta Lindsay
        Socialist Equality Party           Jerry White
        Socialist Party USA                Stewart Alexander
        Socialist Workers Party            James Harris
        Independent Candidates             Jeff Boss, Richard Duncan, Jerry Litzel, Dean Morstad, Jill Reed, Randall Terry, Sheila Tittle, Michael Vargo

    Read the article

  • Creating a Strong Bridge to the Post PC World

    - by Webgui
    Moving from location to location requires strong roads. When crossing a barrier, though, like a body of water or a valley, we must build a strong bridge to get us from point A to point B in a way that is fast, safe, and easy.

    Yet we are not talking here about driving a car or riding a bus. As the computing world moves into the post-PC era, modernizing and migrating legacy applications to harness the power of HTML5 web, cloud, and mobile is one of the most difficult challenges enterprises have faced. Constant technological change has weakened the business value of legacy systems that were developed over the years through huge investments.

    There are several risks in this move. Do you choose to simply rewrite the code of legacy apps and transform them to HTML5 one by one? That is quite expensive (according to research firm Gartner, the cost is $6 – $26 per line of code), and the pace of the rewriting process is very slow – around 170 lines per day for each developer – which slows down business productivity in a world in which no organization can afford to fall behind. Other questions include whether the new cloud-based apps will have the same functionality as the trusted applications that have worked for you for years. How will the user experience be affected? And of course, what about data security?

    So we are faced with the challenge of building a sturdy bridge to stabilize our move, allowing us to confidently and easily bring our legacy applications into the post-PC era. We at Gizmox are excited to release the first downloadable Community Technology Preview (CTP) of our Instant CloudMove Transposition Studio.

    Developers: To download the tool and try it out for yourself, please visit http://www.visualwebgui.com/download.aspx.

    The CTP is the first and only tool-based solution that allows any Microsoft Visual Studio developer to extend VB6 and .NET enterprise client/server applications into HTML5 web, cloud, and mobile applications, including the ability to upgrade their code and UI while doing so. It is the only solution that fully replicates enterprise desktop application behavior in the post-PC era. With Instant CloudMove, the transposed application is available on any mobile or tablet device, in any browser, and across any client operating system. Moreover, the extended application logic and data remain on the server behind the firewall, so the application’s front end is secured by design.

    We would love for you to try out the tool for yourselves and let us know what you think. How are you finding the move?

    Read the article

  • Adjusting the Score on Oracle Text search results

    - by Kyle Hatlestad
    When you sort the results of a search by Score using OracleTextSearch as the search engine in WebCenter Content, the results coming back are ranked by the relevancy of the document. In theory, the more relevant the search term is to the document, the higher the Score it should receive. But in practice, the relevancy score can seem somewhat of a mystery. It’s not entirely clear how it ranks the importance of some documents over others based on the search term. And often, once a word appears a certain number of times within a document, the Score simply maxes out at 100 and the top results can be difficult to discern from one another. Take, for example, the search for 'vacation' on this set of documents: out of 7 results, 6 of them have a Score of 100, which means they are basically ranked the same. This doesn’t make the sort by Score very meaningful.

    Besides sorting by relevance, you can also tell Oracle Text to sort by occurrence. In that case, the ranking is much more predictable, and in many cases it provides a more meaningful sorting of results than relevance. Making this change takes a small component override of the SearchOperatorMap resource. By default, the query used for full-text searching looks like:

        <td>(ORACLETEXTSEARCH)fullText</td>
        <td>DEFINESCORE((%V), RELEVANCE * .1)</td>
        <td>text</td>

    Overriding this resource and changing it to:

        <td>(ORACLETEXTSEARCH)fullText</td>
        <td>DEFINESCORE((%V), OCCURRENCE * .01)</td>
        <td>text</td>

    will force it to use occurrence instead (note the change in scale to .01 as well). Running the same search and sort options as the example above, the results come out quite a bit differently: there is now a clear understanding of how the items rank. And generally, if the search term appears three times more often in one document than in another, that document has a better chance of being one I’m interested in. You may or may not feel the relevance ranking is better than the search-term occurrence, but this provides the opportunity to try an alternate method that might work better for your results. A pre-built component is available for download here. There is one caveat in using this method: the occurrence ranking also maxes out at 100, so if a search term appears in the document more times than that, the Score will stay at 100.
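    For reference, here is what the override might look like inside a custom component’s resource file. This is a minimal sketch that assumes the standard <@table ...@> resource-table markup used in WebCenter Content component .htm files; the exact columns and surrounding markup of SearchOperatorMap in your version may differ, so the pre-built component linked above is the safer starting point:

        <!-- Sketch of a component resource override (assumed markup); only the
             DEFINESCORE expression changes, from RELEVANCE * .1 to OCCURRENCE * .01 -->
        <@table SearchOperatorMap@>
        <table border=1>
          <tr>
            <td>(ORACLETEXTSEARCH)fullText</td>
            <td>DEFINESCORE((%V), OCCURRENCE * .01)</td>
            <td>text</td>
          </tr>
        </table>
        <@end@>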

    Read the article
