Search Results

Search found 21331 results on 854 pages for 'require once'.


  • Bulk Rename Tool is a Lightweight but Powerful File Renaming Tool

    - by Jason Fitzpatrick
    There's no need to settle for overly simplistic file renaming tools as long as Bulk Rename Tool is around. It's lightweight, insanely customizable, portable, and sure to make short work of any renaming task you throw at it. Bulk Rename Tool is a great portable application (available as an installed version if you crave context menu integration) that blasts through file renaming tasks. The main panel is intimidatingly packed with toggles and variables you can alter; this isn't a one-click solution by any means. That said, once you get comfortable using the interface it's lightning fast and extremely flexible. One tip that will save you an enormous amount of frustration when you get started: make sure to highlight the files you want to change in the file preview window (located in the upper right corner), or else you won't see the preview and won't know if the changes you're making in the control panel are yielding the file names you desire. Hit up the link below to read more and grab a copy; Bulk Rename Tool is free, Windows only.

    Read the article

  • Why is DAVExplorer not connecting?

    - by C.W.Holeman II
    DAVExplorer is not connecting. "Connecting to a WebDAV Server" states: "Once you have entered a location URL, and (if necessary) your login name and password, DAV Explorer will connect to the remote WebDAV server, and request a listing of the resources there. A hierarchical view of the sub-collections will be displayed."

    Invoke Apache Jackrabbit:

        $ java -jar jackrabbit-standalone-2.0.0.jar --port 8200
        Welcome to Apache Jackrabbit!
        -------------------------------
        Using repository directory jackrabbit
        Writing log messages to jackrabbit/log
        Starting the server...
        Apache Jackrabbit is now running at http://localhost:8200/

    Use DAVExplorer:

        $ java -jar DAVExplorer.jar

    Then connect to localhost:8200/repository/default/, which pops up:

        Login
        =====
        Login name: [admin]
        Password:   [admin]
        <OK>

    The pop-up closes, then nothing changes. Using cadaver confirms Jackrabbit is working:

        $ cadaver http://localhost:8200/repository/default/
        Authentication required for Jackrabbit Webdav Server on server `localhost':
        Username: admin
        Password:
        dav:/repository/default/> ls
        Listing collection `/repository/default/': succeeded.
        Coll:   com                              0  Mar 13 11:07
        Coll:   it                               0  Mar 13 11:07
        Coll:   net                              0  Mar 13 11:07
        Coll:   org                              0  Mar 13 11:07
        Coll:   za                               0  Mar 13 11:07

    Read the article

  • Debugging Node.js applications for Windows Azure

    - by cibrax
    In case you are developing a new web application with Node.js for Windows Azure, you might notice there is no easy way to debug the application unless you are developing in an integrated IDE like Cloud9. For those who develop applications locally using a text editor (or WebMatrix) and Windows Azure PowerShell for Node.js, it requires some steps not documented anywhere for the moment. I spent a few hours on this the other day and practically got nowhere until I received some help from Tomek and the rest of the team. The IISNode version that currently ships with the Windows Azure SDK for Node.js does not support debugging by default, so you need to install the full IISNode version available in the GitHub repository. Once you have installed the full version, you need to enable debugging for the web application by modifying the web.config file:

        <iisnode debuggingEnabled="true" loggingEnabled="true" devErrorsEnabled="true" />

    The XML above needs to be inserted within the existing "<system.webServer/>" section. The last step is to open a WebKit browser (e.g. Chrome) and navigate to the URL where your application is hosted, adding the segment "/debug" to the end. The full URL to the Node.js application must be used, for example, http://localhost:81/myserver.js/debug. That should open a new instance of Node Inspector in the browser, so you can debug the application from there. Enjoy!!
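    For context, a hedged sketch of where that element sits in a complete web.config: the iisnode attributes are the ones quoted above, while the handler registration is an assumption and may differ in your setup.

        <!-- Sketch only: the iisnode attributes come from the post above; the
             handler entry (name/path) is an assumption for illustration. -->
        <configuration>
          <system.webServer>
            <handlers>
              <add name="iisnode" path="myserver.js" verb="*" modules="iisnode" />
            </handlers>
            <iisnode debuggingEnabled="true" loggingEnabled="true" devErrorsEnabled="true" />
          </system.webServer>
        </configuration>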

    Read the article

  • SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet

    - by pinaldave
    No! It is not a SQL Server issue or an SSMS issue. It is just how things work, and there is a simple trick to resolve it. It is very common that when users copy a resultset to Excel, the floating point or decimal values are lost. The solution is quite simple and requires a small adjustment in Excel. By default Excel is very smart: when it detects that the value being pasted is numeric, it changes the column format to accommodate that. Since trailing zeros after the decimal point have no numeric value, Excel automatically hides them. To prevent this from happening, the user has to convert the columns to Text format so the formatting is preserved. Here is how you can do it. Right-click the corner between A and 1; it will select the complete spreadsheet. If you want to change the format of a single column, you can select an individual column the same way. In the menu, click on Format Cells… It will bring up the following menu. Here the selected columns will default to General; change that to Text. It will change the format of all the cells to Text. Now once again paste the values from SSMS into Excel. This time it will preserve the decimal values from SSMS. Solved! Do you know any other trick to preserve decimal values? Leave a comment please. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Change Windows Server 2012 color scheme without Desktop Experience feature

    - by Fez Vrasta
    I have a Windows Server 2012 machine. Blue is nice... but I'd prefer a less eye-punching color, maybe gray or black. I'm a GNU/Linux sysadmin, and just having the entire GUI on a server is awkward for me, so I would rather avoid installing the Desktop Experience feature just to change the color of the GUI. I have read here: How to change color scheme in Windows Server 2012 that once I've changed the color I may remove the Desktop Experience feature and the color will not revert to the original. So I guess there must be a way to change the color without installing this feature pack, because it looks like it just adds the control panel used to set the color, not the core feature, which might be accessible through some registry key. Does anyone have any ideas?

    Read the article

  • Java 6.0 Virtual Machine re-caches application on every load

    - by David Neale
    I have a Java application which is loaded and cached by the JRE, and for most users it only needs to cache once unless the application software has changed. However, I have one computer that caches the entire application every time it is loaded. It is not the version of the JRE; I have the same version running on other machines. It also works on this machine if logged in as a local admin, just not as a standard user. Does anybody have any ideas on what might be causing this?

    Read the article

  • Windows Phone 7 Silverlight / XNA development talk

    - by subodhnpushpak
    Hi, I presented on Windows Phone 7 app development using Silverlight. Here are a few pics from the event. I demonstrated Visual Studio and the emulator capabilities/features, a demo of WP7 app communication with an OData service, along with a demo of an XNA app. There were lots of curious questions; I am listing them here because they keep popping up again and again:
    1. What tools does it take to develop a WP7 app? Are they free? A typical WP7 app can be developed either using Silverlight or XNA. For developers, Visual Studio 2010 is a good choice as it provides an integrated development environment with lots of useful project templates, which makes the task really easy. For designers, Blend may be used to develop the UI in XAML. Both tools are FREE (express versions) to download and very intuitive to use.
    2. What about the learning curve? If you know C# (or any other programming language), the learning curve is really flat. XAML (used for the UI) may be new to you, but trust me, it's very intuitive. Also, you can use Microsoft Blend to generate the UI (XAML) for you.
    3. How can I develop/test an app without using an actual device? How can I be sure my app runs as expected on an actual device? The WP7 SDK comes along with an excellent emulator, which you can use for development/testing on a computer. Later you can just change a setting and deploy the application to a WP7 device. You will require the Zune software for deploying the application to the phone, along with a developer key from the WP7 marketplace. You can obtain a key from the marketplace by filling out a form. The whole process of registering is easy; just follow the steps on the site.
    4. Which one should I use? Silverlight or XNA? Use Silverlight for enterprise/business/utility apps. Use XNA for games. While each platform is capable and strong, and they may be used in conjunction as well, the methodologies used for development on these platforms are very different. XNA works on a typical do..while loop, whereas Silverlight works on an event-based methodology.
    5. Where are the learning resources? Are they free? There are lots of resources on WP7, and most of them are free. There is an excellent free book by Charles Petzold to download, and http://www.microsoft.com/windowsphone is full of demos/to-dos/videos.
    All the exciting stuff was captured live and you can view it here, in case you were not able to catch it live: http://livestre.am/AUfx. My talk starts from the 3:19:00 timeline in the video!! Is there an app you miss on WP7? Do let me know about it and I may work on it for free!!! Keep discovering. Keep it Simple. WP7. Subodh

    Read the article

  • Key ATG architecture principles

    - by Glen Borkowski
    Overview
    The purpose of this article is to describe some of the important foundational concepts of ATG. This is not intended to cover all areas of the ATG platform, just the most important subset - the ones that allow ATG to be extremely flexible, configurable, high performance, etc. For more information on these topics, please see the online product manuals.
    Modules
    The first concept is called the 'ATG Module'. Simply put, you can think of modules as the building blocks for ATG applications. The ATG development team builds the out of the box product using modules (these are the 'out of the box' modules). Then, when a customer is implementing their site, they build their own modules that sit 'on top' of the out of the box ATG modules. Modules can be very simple - containing minimal definition, and perhaps a small amount of configuration. Alternatively, a module can be rather complex - containing custom logic, database schema definitions, configuration, one or more web applications, etc. Modules generally will have dependencies on other modules (the modules beneath them). For example, the Commerce Reference Store module (CRS) requires the DCS (out of the box commerce) module.
    Modules have a ton of value because they provide a way to decouple a customer's implementation from the out of the box ATG modules. This allows for a much easier job when it comes time to upgrade the ATG platform. Modules are also a very useful way to group functionality into a single package which can be leveraged across multiple ATG applications.
    One very important thing to understand about modules, or more accurately, ATG as a whole, is that when you start ATG, you tell it what module(s) you want to start. One of the first things ATG does is to look through all the modules you specified, and for each one, determine a list of modules that are also required to start (based on each module's dependencies). Once this final, ordered list is determined, ATG continues to boot up. One of the outputs from the ordered list of modules is that each module can contain its own classes and configuration. During boot, the ordered list of modules drives the unified classpath and configpath. This is what determines which classes override others, and which configuration overrides other configuration. Think of it as a layered approach.
    The structure of a module is well defined. It simply looks like a folder in a filesystem that has certain other folders and files within it. Here is a list of items that can appear in a module:
    MyModule:
      META-INF - this is required, along with a file called MANIFEST.MF which describes certain properties of the module. One important property is what other modules this module depends on.
      config - this is typically present in most modules. It defines a tree structure (folders containing properties files, XML, etc) that maps to ATG components (these are described below).
      lib - this contains the classes (typically in jarred format) for any code defined in this module
      j2ee - this is where any web-apps would be stored
      src - in case you want to include the source code for this module, it's standard practice to put it here
      sql - if your module requires any additions to the database schema, you should place that schema here
    Modules can also contain sub-modules. A dot-notation is used when referring to these sub-modules (i.e. MyModule.Versioned, where Versioned is a sub-module of MyModule).
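    As an illustration of the dependency property mentioned above, here is a hedged sketch of a minimal MANIFEST.MF. Manifest files have no comment syntax, so the hedging lives in this sentence: every value below is illustrative, though ATG-Required, ATG-Config-Path, and ATG-Class-Path are the customary ATG manifest attributes.

        Manifest-Version: 1.0
        ATG-Required: DCS
        ATG-Config-Path: config/
        ATG-Class-Path: lib/classes.jar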
    Finally, it is important to completely understand how modules work if you are going to leverage them effectively. There are many different ways to design the modules you want to create; some approaches are better than others, especially if you plan to share functionality between multiple different ATG applications.
    Components
    A component in ATG can be thought of as a single item that performs a certain set of related tasks. An example could be a ProductViews component - used to store information about what products the current customer has viewed. Components have properties (also called attributes). The ProductViews component could have properties like lastProductViewed (stores the ID of the last product viewed) or productViewList (stores the IDs of products viewed in order of their being viewed). The previous examples of component properties would typically also offer get and set methods used to retrieve and store the property values. Components typically will also offer other types of useful methods aside from get and set. In the ProductViews component, we might want to offer a hasViewed method which will tell you if the customer has viewed a certain product or not.
    Components are organized in a tree-like hierarchy called 'nucleus'. Nucleus is used to locate and instantiate ATG components. So, when you create a new ATG component, it will be able to be found 'within' nucleus. Nucleus allows ATG components to reference one another - this is how components are strung together to perform meaningful work. It's also a mechanism to prevent redundant configuration - define it once and refer to it from everywhere.
    Components can be extremely simple (i.e. a single property with a get method), or can be rather complex, offering many properties and methods. To be an ATG component, a few things are required:
      a class - you can reference an existing out of the box class or you could write your own
      a properties file - this is used to define your component
      the above items must be located 'within' nucleus by placing them in the correct spot in your module's config folder
    Within the properties file, you will need to point to the class you want to use:
      $class=com.mycompany.myclass
    You may also want to define the scope of the class (request, session, or global):
      $scope=session
    In summary, ATG components live in nucleus, generally have links to other components, and provide some meaningful type of work. You can configure components as well as extend their functionality by writing code.
    Repositories
    Repositories (a.k.a. Data Anywhere Architecture) are the mechanism that ATG uses to access data primarily stored in relational databases, but also LDAP or other backend systems. ATG applications are required to be very high performance, and data access is critical: if not handled properly, it could create a bottleneck. ATG's repository functionality has been around for a long time - it has proven to be extremely scalable. Developers new to ATG need to understand how repositories work as this is a critical aspect of the ATG architecture.
    Repositories essentially map relational tables to objects in ATG, as well as handle caching. ATG defines many repositories out of the box (i.e. user profile, catalog, orders, etc), and this is comprised of both the underlying database schema along with the associated repository definition files (XML).
    It is fully expected that implementations will extend / change the out of the box repository definitions, so there is a prescribed approach to doing this. The first thing to be sure of is to encapsulate your repository definition additions / changes within your own module (as described above). The other important best practice is to never modify the out of the box schema - in other words, don't add columns to existing ATG tables, just create your own new tables. These will help ensure you can easily upgrade your application at a later date.
    xml-combination
    As mentioned earlier, when you start ATG, the order of the modules will determine the final configpath. Files within this configpath are 'layered' such that modules on top can override configuration of modules below them. This is the same concept for repository definition files. If you want to add a few properties to the out of the box user profile, you simply need to create an XML file containing only your additions, and place it in the correct location in your module. At boot time, your definition will be combined (hence the term xml-combination) with the lower, out of the box modules, with the result being a user profile that contains everything (out of the box, plus your additions). Aside from just adding properties, there are also ways to remove and change properties.
    types of properties
    Aside from the normal 'database backed' properties, there are a few other interesting types:
      transient properties - these are properties that are in memory, but not backed by any database column. These are useful for temporary storage.
      java-backed properties - by nature, these are transient, but in addition, when you access this property (by calling the get method), instead of looking up a piece of data, it performs some logic and returns the results. 'Age' is a good example - if you're storing a birth date on the profile, but your business rules are defined in terms of someone's age, you could create a simple java-backed property to look at the birth date, compare it to the current date, and return the person's age.
      derived properties - this is what allows for inheritance within the repository structure. You could define a property at the category level, and have the product inherit its value as well as override it. This is useful for setting defaults, with the ability to override.
    caching
    There are a number of different caching modes which are useful at different times depending on the nature of the data being cached. For example, the simple cache mode is useful for things like user profiles. This is because the user profile will typically only be used on a single instance of ATG at one time. Simple cache mode is also useful for read-only types of data such as the product catalog. Locked cache mode is useful when you need to ensure that only one ATG instance writes to a particular item at a time - an example would be a customer's order. There are many options in terms of configuring caching which are outside the scope of this article - please refer to the product manuals for more details.
    Other important concepts - out of scope for this article
    There are a whole host of concepts that are very important pieces of the ATG platform, but are out of scope for this article. Here's a brief description of some of them:
      formhandlers - these are ATG components that handle form submissions by users.
      pipelines - these are configurable chains of logic that are used for things like handling a request (request pipeline) or checking out an order.
      special kinds of repositories (versioned, files, secure, ...) - there are a couple of different types of repositories that are used in various situations. See the manuals for more information.
      web development - JSP/DSP tag library - ATG provides a traditional approach to developing web applications by providing a tag library called the DSP library. This library is used throughout your JSP pages to interact with all the ATG components.
      messaging - a message sub-system used as another way for components to interact.
      personalization - the ability for business users to define a personalized user experience for customers. See the other blog posts related to personalization.

    Read the article

  • How can I share a Windows 7 library with anonymous access without using a Homegroup?

    - by Triynko
    The "homegroup" feature is useless, because it requires a password, and therefore doesn't support anonymous, no-hassle access to shares from devices such as my Sony Bravia TV and non-Windows7 machines. So I turned off homegroup and reverted to the standard shared folders protocol. I'd like to share my Music "Library", so I can play files from my TV through the Surround Sound System, but there seems to be no option to share a library folder other than through a homegroup. I don't want to have to individually share the folders that belong to the library, because that would defeat the purposes of the library, which is to manage which folders are included in the library while also providing an easily accessible view of them all at once. Does anyone know how to share a Windows 7 library without that useless homegroup feature?

    Read the article

  • The Endeca UI Design Pattern Library Returns

    - by Joe Lamantia
    I'm happy to announce that the Endeca UI Design Pattern Library - now titled the Endeca Discovery Pattern Library - is once again providing guidance and good practices on the design of discovery experiences.  Launched publicly in 2010 following several years of internal development and usage, the Endeca Pattern Library is a unique and valued source of industry-leading perspective on discovery - something I've come to appreciate directly through  fielding the consistent stream of inquiries about the library's status, and requests for its rapid return to public availability. Restoring the library as a public resource is only the first step!  For the next stage of the library's evolution, we plan to increase the scope of the guidance it offers beyond user interface design to the broader topic of discovery.  This could include patterns for architecture at the systems, user experience, and business levels; information and process models; analytical method and activity patterns for conducting discovery; and organizational and resource patterns for provisioning discovery capability in different settings.  We'd like guidance from the community on the kinds of patterns that are most valuable - so make sure to let us know. And we're also considering ways to increase the number of patterns the library offers, possibly by expanding the set of contributors and the authoring mechanisms. If you'd like to contribute, please get in touch. Here's the new address of the library: http://www.oracle.com/goto/EndecaDiscoveryPatterns And I should say 'Many thanks' to the UXDirect team and all the others within the Oracle family who helped - literally - keep the library alive, and restore it as a public resource.

    Read the article

  • IBM x3200 M2 upgrade and VT-x / USB passthrough for ESXi

    - by enzo
    Hi, I have an IBM System x3200 M2 server (4368-K1G). I ask simply because IBM support no longer offers any upgrade kits for sale... If I buy a Xeon processor, for example an X3330 (4 cores @ 2.6 GHz), and put it on my motherboard:
    1) Will it work?
    2) Will a VT-x menu appear in the BIOS (where there is currently no trace of one) so I can select and enable it?
    3) With ESXi, once VT-x is on, will I be able to use USB passthrough?
    Thank you in advance for your reply

    Read the article

  • Is it possible to retrieve lost Windows XP profile information?

    - by Simon Shaw
    I'm running Windows XP Professional on a PC which is less than a year old. About 3 months ago I got an error on boot saying Windows could not find my profile. At this point it created a temp one. On this occasion I copied the temp data back into my existing profile and carried on using it. After another month it died again and, once again, set me up with a temp user. Each time this happens I lose everything related to my profile (desktop, start menu, etc). I ran checks on the hard disk and these suggested nothing was wrong, so this seems to be a software issue, not a disk problem. On the advice of my computer manufacturer I am now considering reinstalling Windows, but I just wanted to check if there are any other ideas for things I might be able to try. Any suggestions for what could be causing the problem would also be gladly appreciated.

    Read the article

  • How to convert a series of MP3s to M4B in a batch

    - by Artem Tikhomirov
    I have a batch of MP3-based audiobooks. Some of them are divided into files according to the book's own structure: chapters and so on. Some of them were just divided into parts of equal length. So: I've bought an iPhone, and I want to convert them all to the M4B format. How could I convert them in a batch? I mean, how could I set up a process once, for each book, and then, after a couple of weeks, receive a fully converted library? The only capable program for such conversion I've found was Audiobook Builder for the Mac, but it is pretty slow and does not support batching in principle. Solutions for any platform, please.
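    Not from the original question, but as a sketch of one possible batch approach, assuming a recent ffmpeg is installed and each book lives in its own directory of MP3s (an M4B is essentially AAC audio in an MP4 container with an .m4b extension):

        # Hedged sketch: concatenate each book's MP3s, re-encode to AAC,
        # and write an .m4b. The directory-per-book layout is an assumption.
        for book in */; do
            ( cd "$book"
              for f in *.mp3; do printf "file '%s/%s'\n" "$PWD" "$f"; done > list.txt
              ffmpeg -f concat -safe 0 -i list.txt -vn -c:a aac -b:a 64k \
                     -f ipod "../${book%/}.m4b"      # -f ipod = MP4/M4B muxer
              rm list.txt )
        done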

    Read the article

  • Is there a way to legally create a game mod?

    - by Rodrigo Guedes
    Some questions about it:
    If I create a funny version of a copyrighted game and sell it (crediting the original developers), would it be considered a parody or would I need to pay royalties?
    If I create a game mod for my own personal use, would it be legal? What if I gave it for free to a friend?
    Is there a general rule about it, or does it depend on the developer's will?
    P.S.: I'm not talking about cloning games like this question. It's all about a game clearly based on another. Something like "GTA Gotham City" ;)
    EDIT: This picture that I found on the internet illustrates what I'm talking about:
    Just in case I was not clear: I have never created a mod. I was just wondering if it would be legally possible before trying to do it. I'm not defending piracy. I pay dearly for my games (you guys have no idea how expensive games are in Brazil due to taxes). Once more I say that the question is not about cloning. Cloning is copying something and trying to make your version look like a brand new product. Mods are intended to make reference to one or more of their sources. I'm not sure if it can be done legally (if I knew, I wouldn't be asking), but I'm sure this question is not a duplicate. Even so, I trust the moderators, and if they close my question I will not be offended - at least I had an opportunity to explain myself and got 1 good answer (by the time I write this; maybe some more will be given later).

    Read the article

  • Clonezilla-SE with another DHCP Server in LAN

    - by aleroot
    I want to install Clonezilla Server (192.168.1.100) in a network that already has a DHCP server (DD-WRT with dnsmasq - 192.168.1.1). I've installed Clonezilla SE on Ubuntu Server 10.10. Once Clonezilla Server was installed and configured, I removed its DHCP server and set the PXE server address in the dnsmasq configuration on the DHCP server:

        dhcp-boot=pxelinux.0,,192.168.1.100

    When I try to start a computer on the network from PXE, Clonezilla starts, but gives me an error that the IP address of the machine was not given by the Clonezilla server, and it can't continue... Has anyone already tried to configure Clonezilla SE in a similar environment? Is there some configuration of Clonezilla's DRBL server that I need to do?

    Read the article

  • What is your strategy for converting RC builds into retail?

    - by Matthew PK
    We're trying to implement a strategy for how we transition our builds from RC to released retail code. When we label a build as a release candidate, we send it to QA for regression. If they approve it, that RC then becomes our released retail code. I like the idea of "obvious" labeling of versions so that a user knows whether they have a beta, an RC, or retail code... where you would have some obvious watermark in non-retail code (think Windows 7, where the RC or non-genuine builds show a watermark in the bottom right). ...but it seemed strange to us to manipulate the project (to remove the watermark) once it passed regression. If QA certified version a.b.c.d, then our retail code should be that same version, not a.b.c.d+1. What strategies have you employed to clearly label non-release software versions without incrementing your build to disable the watermarks in your retail code? One idea I've considered is having your build look for a signed file in the installer archive... non-release code wouldn't include this file and so the app would know to display a watermark. But even this seems like QA is then working with non-release code. Ideas?
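    To make the marker-file idea from the last paragraph concrete, here is a hedged sketch in Java; the file name and location are assumptions, and verifying the file's signature is deliberately left out:

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public final class WatermarkCheck {
            // Hypothetical marker: release builds ship a signed file next to the app.
            private static final Path RELEASE_MARKER = Paths.get("release.signature");

            // Non-release builds lack the marker, so the UI draws the watermark.
            // (A real implementation would also verify the signature.)
            public static boolean showWatermark() {
                return !Files.exists(RELEASE_MARKER);
            }
        }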

    Read the article

  • Anyone tried dd'ing RAID members?

    - by DusteD
    I want to replace all disks in a 10-disk RAID 6 (Linux software RAID). I could do this by pulling a disk, letting the array rebuild, rinse, repeat. But this would take a very long time and cause 10 rebuilds, which would most likely stress all 10 disks much more than simply reading each disk through once. My question is thus: could I just shut down the array and dd each old disk to a new disk, then start the array with the 10 new disks? In an ideal world, I would build another server and just copy the data via the network, but this is not an ideal world.
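    Not from the original question - a hedged sketch of what the per-disk copy could look like, assuming the array is /dev/md0 and the device names are purely illustrative:

        # Stop the array first so no member changes underneath the copy.
        mdadm --stop /dev/md0
        # Sector-for-sector copy of one old member to one new disk; the md
        # superblock copies with it. bs=64K speeds things up, and
        # conv=noerror,sync keeps going past read errors.
        dd if=/dev/sda of=/dev/sdk bs=64K conv=noerror,sync
        # ...repeat for the remaining nine members, then reassemble:
        mdadm --assemble /dev/md0 /dev/sdk /dev/sdl ...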

    Read the article

  • Find files containing a string on the whole filesystem

    - by Fabio
    I need to find all the instances of a given string in the whole filesystem, because I don't remember in which configuration files, scripts, or other programs I put it, and I need to update that string with a new one. I tried the following command:

        grep -nr 'needle' / --exclude-dir=.svn | mail [email protected] -s 'References on xxx'

    If I run this command on a small directory, it gives me the output I need in the form

        /path1/:nn:line containing needle
        /path2/:nn:line containing needle

    where /path1 is the full path of the file, nn is the row containing the needle, and the last field is the content of the line. However, when I run the command on the root directory, the grep process hangs after a while. I ran this script about 8 hours ago, and even on a small filesystem (less than 5GB) it doesn't end, and if I run top or ps the process seems to be sleeping:

        root 24909 0.0 0.1 3772 1520 pts/1 S+ Feb10 0:15 grep -nr needle / --exclude-dir=.svn

    Why doesn't it end? Is there any better way to do this (it's a one-time job, I don't need to execute this more than once)? Thanks.
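    Not part of the original question - one likely culprit is grep blocking on special files (a read from a named pipe or a device such as /dev/random can stall forever), so a hedged refinement is to skip devices and pseudo-filesystems:

        # Sketch: -D skip ignores devices/FIFOs/sockets, -I skips binary files,
        # and the excluded directories are the usual pseudo-filesystems.
        $ grep -rnI -D skip --exclude-dir={.svn,proc,sys,dev,run} 'needle' / 2>/dev/null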

    Read the article

  • Saving music wisely: Why save 'Queen - Bohemian Rhapsody.mp3' millions of times?

    - by hsmit
    As far as I'm concerned, Queen's song 'Bohemian Rhapsody' is one of the most popular songs of all time, but for the purpose of this message you may replace it with another track. At the same time, I think 60% of digital-music listeners have this track. Sometimes we have multiple copies: different versions of the track, different devices, unwanted duplicates in download folders, iTunes folders, etc. Wouldn't it be much smarter to store these songs only once? You can imagine various solutions for this. How would you accomplish this? Some criteria that may help you find an answer:
    It must reduce disk space
    It must remember which music belongs to you (DRM)
    It must use network traffic efficiently
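    Not from the original question - at the single-machine end of this idea, byte-identical duplicates (only those; different encodings of the same song won't match) can at least be detected by content hash. A hedged sketch assuming GNU coreutils:

        # Group identical MP3s by SHA-256: -w64 compares only the 64-character
        # hash column, --all-repeated prints each duplicate group separated.
        $ find . -name '*.mp3' -exec sha256sum {} + | sort | uniq -w64 --all-repeated=separate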

    Read the article

  • Daily Blog Archives and Duplicate Content

    - by nemmy
    A few weeks back I realised that my blog software was creating daily post archives, which basically resulted in duplicate content, especially if I only had one post a day. The situation is something like this:
    www.sitename.com/blog/archives/2013/06/01 - daily archive for 1 June 2013
    www.sitename.com/blog/archives/2013/06/my-post-name.html
    So, here we have two pages that are basically identical, except the daily archive has some meaningless title like "Daily Archive for 1 June 2013". And I have no control over which content Google decides is the primary content. It's quite possible (and likely) that the daily archive could be the "primary" content and the actual post itself the "duplicate". Once I realised it was doing this, I modified the daily archive template to include <meta name="robots" content="noindex">. Here we are a few weeks later and I still see some daily archives coming up in Google search results. I realise some of those deep pages might not be crawled yet, but I am worried that the original post (which should be the PRIMARY content) has been marked as duplicate content by Google. Now that I've noindexed the daily archives, I might end up with no indexed content AND the original articles still flagged as duplicates. And nothing will show up in search at all. Have I screwed myself here, or is there a way out?
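    Not from the original question - two commonly used variants of that archive-template tag, sketched for reference (the canonical URL below is hypothetical):

        <!-- Keep the archive unindexed but let crawlers follow its links out -->
        <meta name="robots" content="noindex,follow">
        <!-- Or point each daily archive at the page that should be treated as primary -->
        <link rel="canonical" href="http://www.sitename.com/blog/archives/2013/06/my-post-name.html">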

    Read the article

  • Drawing text from update method in XNA

    - by Sigh-AniDe
    I am having a problem drawing the "Game Over!" text once the user is on the last tile. This is what I have. The Update and drawText methods are in a class named Turtle:

        public void Update(float scalingFactor, int[,] map, SpriteBatch batch, SpriteFont font)
        {
            if (isMovable(mapX, mapY - 1, map))
            {
                position.Y = position.Y - (int)scalingFactor;
                angle = 0.0f;
                Program.form.direction = "";
                if (mapX == 17 && mapY == 1) // This is the last tile (tested)
                {
                    Program.form.BackColor = System.Drawing.Color.Red;
                    drawText(batch, font);
                }
            }
        }

        public void drawText(SpriteBatch spritebatch, SpriteFont spriteFont)
        {
            textPosition.X = 200; // textPosition is a Vector2
            textPosition.Y = 200;
            spritebatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend);
            spritebatch.DrawString(spriteFont, "Game Over!!!", textPosition, Color.Red);
            spritebatch.End();
        }

    This Update is in the Game1 class:

        protected override void Update(GameTime gameTime)
        {
            // Allows the game to exit
            if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                this.Exit();
            turtle.Update(scalingFactor, map, spriteBatch, font);
            base.Update(gameTime);
        }

    I have also added the font content in LoadContent:

        font = Content.Load<SpriteFont>("fontType");

    What am I doing wrong? Why does the text not show on game completion? If I call turtle.draw() in the main Draw method, the "Game Over" text stays on screen from the beginning. What am I missing? Thanks
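    Not from the original question - a hedged sketch of one common XNA pattern: record the game-over state in Update and do all drawing from Draw, so the text is redrawn every frame rather than for a single frame. Field names follow the question; the last-tile check is the one quoted above:

        // Sketch only: assumes spriteBatch, font, mapX, mapY exist as in the question.
        bool gameOver = false;

        protected override void Update(GameTime gameTime)
        {
            if (mapX == 17 && mapY == 1) // last tile, per the question
                gameOver = true;
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            if (gameOver)
            {
                spriteBatch.Begin();
                spriteBatch.DrawString(font, "Game Over!!!", new Vector2(200, 200), Color.Red);
                spriteBatch.End();
            }
            base.Draw(gameTime);
        }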

    Read the article

  • EPM 11.1.2.2 Architecture: Financial Performance Management Applications

    - by Marc Schumacher
    Financial Management can be accessed either by a browser-based client or by SmartView. Starting from release 11.1.2.2, the Financial Management Windows client no longer accesses the Financial Management Consolidation server. All tasks that require an online connection (e.g. load and extract tasks) can only be done using the web interface. Any client connection initiated by a browser or SmartView is sent to the Oracle HTTP Server (OHS) first. Based on the path given in the URL (e.g. hfmadf, hfmofficeprovider), OHS decides whether to forward the request to the new Financial Management web application based on the Oracle Application Development Framework (ADF) or to the .NET based application serving SmartView retrievals running on Internet Information Services (IIS). Any requests sent to the ADF web interface that need to be processed by the Financial Management application server are sent to IIS using the HTTP protocol and forwarded further using DCOM to the Financial Management application server. SmartView requests, which are processed by IIS in the first place, are forwarded to the Financial Management application server using DCOM as well. The Financial Management application server uses OLE DB database connections via native database clients to talk to the Financial Management database schema. Communication between the Financial Management DME Listener, which handles requests from EPMA, and the Financial Management application server is based on DCOM.
    Unlike most other components, Essbase Analytics Link (EAL) does not have an end-user interface. The only user interface is a plug-in for the Essbase Administration Services console, which is used for administration purposes only. End users interact with a Transparent or Replicated Partition that is created in Essbase and populated with data by EAL. The Analytics Link Server deployed on WebLogic communicates through the HTTP protocol with the Analytics Link Financial Management Connector that is deployed in IIS on the Financial Management web server. The Analytics Link Server interacts with the Data Synchronization Server using the EAL API. The Data Synchronization Server acts as a target of a Transparent or Replicated Partition in Essbase and uses a native database client to connect to the Financial Management database. The Analytics Link Server uses JDBC to connect to relational repository databases and the Essbase JAPI to connect to Essbase.
    As with most Oracle EPM System products, browser-based clients and SmartView can be used to access Planning. The Java-based Planning web application is deployed on WebLogic, which is configured behind an Oracle HTTP Server (OHS). Communication between Planning and the Planning RMI Registry Service is done using Java Remote Method Invocation (RMI). Planning uses JDBC to access relational repository databases and talks to Essbase using the CAPI. Be aware of the fact that besides the Planning System database, a dedicated database schema is needed for each application that is set up within Planning.
    Like Planning, Profitability and Cost Management (HPCM) has a pretty simple architecture. Besides the browser-based clients and SmartView, a web service consumer can be used as a client too. All clients access the Java-based web application deployed on WebLogic through the Oracle HTTP Server (OHS). Communication between Profitability and Cost Management and the EPMA Web Server is done using the HTTP protocol. JDBC is used to access the relational repository databases as well as data sources. The Essbase JAPI is utilized to talk to Essbase.
    For Strategic Finance, two clients exist: SmartView and a Windows client. While SmartView communicates through the web layer to the Strategic Finance Server, the Strategic Finance Windows client makes a direct connection to the Strategic Finance Server using RPC calls. Connections from Strategic Finance Web as well as from Strategic Finance Web Services to the Strategic Finance Server are made using RPC calls too. The Strategic Finance Server uses its own file-based data store. JDBC is used to connect to the EPM System Registry from the web and application layers.
    Disclosure Management has three kinds of clients. While the browser-based client and SmartView interact with the Disclosure Management web application directly through the Oracle HTTP Server (OHS), the Taxonomy Designer does not connect to the Disclosure Management server. Communication with relational repository databases is done via JDBC; to connect to Essbase, the Essbase JAPI is utilized.

    Read the article

  • How can I upgrade my server's kernel without rebooting?

    - by Oli
    This is a loaded question because I'm already aware of, and am very interested in, ksplice. The problem is that since they were bought by Oracle, they have been forced to pull numerous server distributions from the offerings. The answer isn't as simple as it once was. I noticed a question on Unix.SE that states: "You can build your own ksplice patches to dynamically load into your own kernel." Great! But how?! I've installed the free ksplice package in the repo on my desktop (not ksplice-uptrack, which is non-free) and now want to generate and apply updates. What's the process? Are there any scripts out there to automate it? Moreover, if all the machinery required for rebootless upgrades is sitting there in the kernel (and the ksplice package), why on earth aren't we taking advantage of it by default?
    Note 1: I am happy with a solution besides ksplice, but it has to deliver the same thing: rolling updates to the kernel that can be applied without rebooting the server.
    Note 2: I'll say it again: the main ksplice "service" does not support Ubuntu Server. It used to, but it doesn't any more. When I talk about wanting to use ksplice, I'm talking about the open source tools in the ksplice package. Any answer that talks about ksplice-uptrack is probably not what I'm after, as that is the part that integrates directly with the aforementioned "service".

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise
    In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger, because the amount of data is much bigger - so much bigger that our approach in the previous post won't scale to meet the real-world needs.
    From our previous post, here are the challenges we need to conquer:
    The actual data that needs to be used lives in a database, not in a spreadsheet
    The actual data is much, much bigger - too big to fit into the normal R memory space and too big to want to move across the network
    The overall process needs to run fast - much faster than a single processor
    The actual data needs to be kept secured - another reason not to move it from the database and across the network
    And the process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes
    In this post, we will show how we moved from the sample data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof-of-concept.
    Getting started with the Database
    At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise - we had sample data but we did not have the full customer data yet. So our database was empty. But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). The code (shown as a screenshot in the original post; see the hedged sketch after this section) took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create(). The code also showed how we can access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA. If we go to sql*plus, we can also check out our new IRR_DATA table.
    At this point, we now have our sample data loaded in the database as a normal Oracle table called IRR_DATA, so we proceeded to test our R function working with database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, then called our IRR function. This worked. No SQL coding required!
    Going from Crawling to Walking
    Now that we have shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts. In other words, we wanted to implement the split-apply-combine technique we discussed in our first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1
    The example (also a screenshot in the original post; sketched below) asks ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)
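    The original post's code appeared only as screenshots, which did not survive this copy. Below is a hedged reconstruction in R: ore.create() and ore.groupApply() are real ORE functions, but the connection details, exact argument values, and the SimpleMWRR() signature are assumptions based on the surrounding prose.

        # Hedged sketch - reconstructed from the prose, not copied from the post.
        library(ORE)
        ore.connect(user = "rquser", sid = "orcl", host = "dbserver",
                    password = "...", all = TRUE)   # connection details assumed

        # Turn the local sample data.frame into a database table, then use
        # IRR_DATA transparently as if it were a local data.frame.
        ore.create(SimpleMWRRData, table = "IRR_DATA")

        # Split IRR_DATA by ACCOUNT in the database, run SimpleMWRR() on each
        # group in an embedded R engine, and combine the per-account results.
        results <- ore.groupApply(IRR_DATA, IRR_DATA$ACCOUNT,
                                  function(dat) myIRR::SimpleMWRR(dat))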
    The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running. What actually happens is that ore.groupApply uses the Oracle database to perform the work. The Oracle database is what actually splits the IRR_DATA table by ACCOUNT. Then the Oracle database takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function. Finally, the Oracle database combines all the individual results from the calls to the R function.
    This is significant because now the embedded R engine only needs to deal with the data for a single account at a time. Regardless of whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem, since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts).
    Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA does not need to leave the database server. This is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once.
    From Walking to Running
    ore.groupApply is rather nice, but it still has the drawback that I run it from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE. Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is the way we actually did these calculations in the customer proof.
    As part of Oracle R Enterprise, there is a SQL equivalent to ore.groupApply, which it refers to as "rqGroupEval". To use rqGroupEval via SQL, a bit of simple setup is needed. Basically, the Oracle Database needs to know the structure of the input table and the grouping column, which we are able to define using the database's pipelined table function mechanisms. (The original post showed the setup script as a screenshot.)
    At this point, our initial setup of rqGroupEval is done for the IRR_DATA table. The next step is to define our R function to the database. We do that via a call to ORE's rqScriptCreate.
    Now we can test it. The SQL you use to run rqGroupEval uses the Oracle database pipelined table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input. You can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional inputs to the R function. The third argument is the text of a dummy select statement. The dummy select statement is used by the database to identify the columns and datatypes to expect the R function to return. The fourth argument is the column of the input table to split/group by.
    The final argument is the name of the R function as you defined it when you called rqScriptCreate().
    The Real-World Results
    In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions.
    For the full-scale customer test, we loaded the customer data onto a Half-Rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE, it is as simple as leveraging the Oracle Database parallel query features: in the SQL used in the customer proof, we put a parallel hint on the cursor that is the input to our rqGroupEval function. That is all we need to do to enable Oracle to use parallel R engines.
    The original post included screenshots of this SQL running in the Real-Time SQL Monitor during the proof of concept. From those, you can notice a few things:
    The SQL completed in 110 seconds (1.8 minutes)
    We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation)
    We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation)
    We ran with 72 degrees of parallelism spread across 4 database servers
    Most of our 110 seconds was spent in the "External Procedure call" event
    On average, we performed 8,200 executions of our R function per second (911k accounts / 110s)
    On average, each execution was passed 110 rows of data (103m detail rows / 911k accounts)
    On average, we did 41,000 single time period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods)
    On average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s)
    R + Oracle R Enterprise: Best of R + Best of Oracle Database
    This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained.
    This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.
    In a future post, we will take the same R function and show how the Oracle R Connector for Hadoop can be used in the Hadoop world. In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and other Oracle Big Data Connectors to move data between Hadoop, R, and the Oracle Database easily.

    Read the article

  • Computer becomes very slow (permanently) after running a bunch of apps

    - by djzmo
    Hello there,
    My computer, with Windows XP installed, becomes very slow after I run some heavy tasks at the same time (playing a full 3D online game while extracting a 4GB RAR archive). It freezes for about 200~500ms every few seconds, and this always happens if I do heavy tasks at once on my computer (for example, installing a program while playing games), and the lag remains permanently (even a reboot won't make it better) unless I repair-install Windows. I have a low-end computer:
    Intel(R) Pentium(R) 4 CPU 2.00 GHz, 512 MB of RAM
    ATI RADEON 9550 AGP 256 MB
    So far, the only way I have found to fix this problem is by repair-installing my Windows XP, so that I won't lose any data or installed programs. But I believe there's a better and faster way to fix this without repair-installing Windows. Any ideas?

    Read the article
