Search Results

Search found 17470 results on 699 pages for 'single quote'.

Page 136/699 | < Previous Page | 132 133 134 135 136 137 138 139 140 141 142 143  | Next Page >

  • SharePoint 2010 and Windows Server Backup

    - by Enrique Lima
    A couple of months ago, a friend found a bit of information on TechNet that has proven to be quite useful. I am of the opinion that SharePoint lends itself to smaller deployments, and by that I mean SharePoint Foundation 2010 for the most part. But the point here is not to debate whether a deployment of SharePoint Foundation 2010 or SharePoint Server 2010 is the right choice. The fact is these deployments happen, and information will reside there. The point of this post is to raise awareness of the options available to companies that have implemented it and are a bit “iffy” on how to protect the information being placed in libraries and lists. In many cases I have found SharePoint comes first and business continuity becomes an afterthought.
    The documentation piece from TechNet states: “You can register SharePoint Server 2010 with Windows Server Backup by using the stsadm.exe -o registerwsswriter operation to configure the Volume Shadow Copy Service (VSS) writer for SharePoint Server. Windows Server Backup then includes SharePoint Server 2010 in server-wide backups. When you restore from a Windows Server backup, you can select Microsoft SharePoint Foundation (no matter which version of SharePoint 2010 Products is installed), and all components reported by the VSS writer for SharePoint Server 2010 on that server at the time of the backup will be restored. Windows Server Backup is recommended only for use with single-server deployments.”
    So even in a single-server deployment you have options to safeguard your data. The process is: after you have executed the stsadm command above, use Windows Server Backup to take a Full Server Backup. Then, when a restore is needed, you can select specifically the section of the backup that holds the SharePoint technologies.
    The restore process: (screenshot in the original post)
    Hope you find this to be a helpful post. I have found it especially handy in SharePoint deployments that are part of a Team Foundation Server deployment and are isolated from any other SharePoint farm.
    Credits: Sean McDonough for passing along the information available on TechNet.

    Read the article

  • Key ATG architecture principles

    - by Glen Borkowski
    Overview
    The purpose of this article is to describe some of the important foundational concepts of ATG.  This is not intended to cover all areas of the ATG platform, just the most important subset - the ones that allow ATG to be extremely flexible, configurable, high performance, etc.  For more information on these topics, please see the online product manuals.
    Modules
    The first concept is called the 'ATG Module'.  Simply put, you can think of modules as the building blocks for ATG applications.  The ATG development team builds the out-of-the-box product using modules (these are the 'out of the box' modules).  Then, when customers implement their sites, they build their own modules that sit 'on top' of the out-of-the-box ATG modules.  Modules can be very simple - containing minimal definition, and perhaps a small amount of configuration.  Alternatively, a module can be rather complex - containing custom logic, database schema definitions, configuration, one or more web applications, etc.  Modules generally have dependencies on other modules (the modules beneath them).  For example, the Commerce Reference Store module (CRS) requires the DCS (out-of-the-box commerce) module.
    Modules have a ton of value because they provide a way to decouple a customer's implementation from the out-of-the-box ATG modules.  This makes for a much easier job when it comes time to upgrade the ATG platform.  Modules are also a very useful way to group functionality into a single package which can be leveraged across multiple ATG applications.
    One very important thing to understand about modules, or more accurately, ATG as a whole, is that when you start ATG, you tell it what module(s) you want to start.  One of the first things ATG does is to look through all the modules you specified and, for each one, determine a list of modules that are also required to start (based on each module's dependencies).  Once this final, ordered list is determined, ATG continues to boot up.  Because each module can contain its own classes and configuration, the ordered list of modules drives the unified classpath and configpath during boot.  This is what determines which classes override others, and which configuration overrides other configuration.  Think of it as a layered approach.
    The structure of a module is well defined.  It simply looks like a folder in a filesystem that has certain other folders and files within it.  Here is a list of items that can appear in a module (MyModule):
    - META-INF - this is required, along with a file called MANIFEST.MF which describes certain properties of the module.  One important property is which other modules this module depends on.
    - config - this is typically present in most modules.  It defines a tree structure (folders containing properties files, XML, etc.) that maps to ATG components (these are described below).
    - lib - this contains the classes (typically in jarred format) for any code defined in this module.
    - j2ee - this is where any web apps would be stored.
    - src - in case you want to include the source code for this module, it's standard practice to put it here.
    - sql - if your module requires any additions to the database schema, you should place that schema here.
    Here's a screenshot of a module: (screenshot in the original article)
    Modules can also contain sub-modules.  A dot notation is used when referring to these sub-modules (i.e. MyModule.Versioned, where Versioned is a sub-module of MyModule).
    Finally, it is important to completely understand how modules work if you are going to leverage them effectively.  There are many different ways to design the modules you want to create; some approaches are better than others, especially if you plan to share functionality between multiple ATG applications.
    Components
    A component in ATG can be thought of as a single item that performs a certain set of related tasks.  An example could be a ProductViews component - used to store information about what products the current customer has viewed.  Components have properties (also called attributes).  The ProductViews component could have properties like lastProductViewed (stores the ID of the last product viewed) or productViewList (stores the IDs of products viewed, in the order they were viewed).  These component properties would typically also offer get and set methods used to retrieve and store the property values.  Components typically offer other types of useful methods aside from get and set.  In the ProductViews component, we might want to offer a hasViewed method which will tell you whether the customer has viewed a certain product or not.
    Components are organized in a tree-like hierarchy called 'nucleus'.  Nucleus is used to locate and instantiate ATG components.  So, when you create a new ATG component, it will be able to be found 'within' nucleus.  Nucleus allows ATG components to reference one another - this is how components are strung together to perform meaningful work.  It's also a mechanism to prevent redundant configuration - define it once and refer to it from everywhere.
    Here is a screenshot of a component in nucleus: (screenshot in the original article)
    Components can be extremely simple (i.e. a single property with a get method), or can be rather complex, offering many properties and methods.  To be an ATG component, a few things are required:
    - a class - you can reference an existing out-of-the-box class or you can write your own
    - a properties file - this is used to define your component
    - the above items must be located 'within' nucleus by placing them in the correct spot in your module's config folder
    Within the properties file, you will need to point to the class you want to use:
    $class=com.mycompany.myclass
    You may also want to define the scope of the component (request, session, or global):
    $scope=session
    In summary, ATG components live in nucleus, generally have links to other components, and provide some meaningful type of work.  You can configure components as well as extend their functionality by writing code.
    Repositories
    Repositories (a.k.a. the Data Anywhere Architecture) are the mechanism that ATG uses to access data stored primarily in relational databases, but also in LDAP or other backend systems.  ATG applications are required to be very high performance, and data access is critical: if not handled properly, it can create a bottleneck.  ATG's repository functionality has been around for a long time and has proven to be extremely scalable.  Developers new to ATG need to understand how repositories work, as this is a critical aspect of the ATG architecture.  Repositories essentially map relational tables to objects in ATG, and also handle caching.  ATG defines many repositories out of the box (i.e. user profile, catalog, orders, etc.), each comprised of both the underlying database schema and the associated repository definition files (XML).
    It is fully expected that implementations will extend / change the out-of-the-box repository definitions, so there is a prescribed approach to doing this.  The first thing is to encapsulate your repository definition additions / changes within your own module (as described above).  The other important best practice is to never modify the out-of-the-box schema - in other words, don't add columns to existing ATG tables; create your own new tables.  These practices help ensure you can easily upgrade your application at a later date.
    xml-combination
    As mentioned earlier, when you start ATG, the order of the modules determines the final configpath.  Files within this configpath are 'layered' such that modules on top can override configuration of modules below them.  The same concept applies to repository definition files.  If you want to add a few properties to the out-of-the-box user profile, you simply create an XML file containing only your additions and place it in the correct location in your module.  At boot time, your definition will be combined (hence the term xml-combination) with the lower, out-of-the-box modules, with the result being a user profile that contains everything (out of the box, plus your additions).  Aside from just adding properties, there are also ways to remove and change properties.
    types of properties
    Aside from the normal 'database backed' properties, there are a few other interesting types:
    - transient properties - these are properties that are in memory, but not backed by any database column.  These are useful for temporary storage.
    - java-backed properties - by nature, these are transient, but in addition, when you access such a property (by calling the get method), instead of looking up a piece of data, it performs some logic and returns the results.  'Age' is a good example - if you're storing a birth date on the profile, but your business rules are defined in terms of someone's age, you could create a simple java-backed property that looks at the birth date, compares it to the current date, and returns the person's age.
    - derived properties - this is what allows for inheritance within the repository structure.  You could define a property at the category level, and have the product inherit its value as well as override it.  This is useful for setting defaults, with the ability to override.
    caching
    There are a number of different caching modes which are useful at different times depending on the nature of the data being cached.  For example, simple cache mode is useful for things like user profiles, because a user profile will typically only be used on a single instance of ATG at one time.  Simple cache mode is also useful for read-only data such as the product catalog.  Locked cache mode is useful when you need to ensure that only one ATG instance writes to a particular item at a time - an example would be a customer's order.  There are many options for configuring caching which are outside the scope of this article - please refer to the product manuals for more details.
    Other important concepts - out of scope for this article
    There is a whole host of concepts that are very important pieces of the ATG platform, but are out of scope for this article.  Here's a brief description of some of them:
    - formhandlers - ATG components that handle form submissions by users.
    - pipelines - configurable chains of logic that are used for things like handling a request (request pipeline) or checking out an order.
    - special kinds of repositories (versioned, files, secure, ...) - there are a couple of different types of repositories that are used in various situations.  See the manuals for more information.
    - web development - JSP / DSP tag library - ATG provides a traditional approach to developing web applications by providing a tag library called the DSP library.  This library is used throughout your JSP pages to interact with all the ATG components.
    - messaging - a message sub-system used as another way for components to interact.
    - personalization - the ability for business users to define a personalized user experience for customers.  See the other blog posts related to personalization.

    Read the article

  • importing animations in Blender, weird rotations/locations

    - by user975135
    This is for the Blender 2.6 API. There are two problems:
    1. When I import a single animation frame from my animation file to Blender, all bones look fine. But when I import multiple frames (all of them), only the first one looks right; it seems like newer frames are affected by older ones, so you get slightly off positions/rotations. This is true both when assigning PoseBone.matrix and when assigning PoseBone.matrix_basis.
        bone_index = 0
        # for each frame:
        for frame_index in range(frame_count):
            # for each pose bone: add a key
            for bone_name in bone_names:  # "bone_names" - a list of bone names I got earlier
                # "animation_matrices" - a nested list of matrices generated from reading a file
                pose.bones[bone_name].matrix = animation_matrices[frame_index][bone_index]
                # create the 'keys' for the Action from the poses
                pose.bones[bone_name].keyframe_insert('location', frame=frame_index+1)
                pose.bones[bone_name].keyframe_insert('rotation_euler', frame=frame_index+1)
                pose.bones[bone_name].keyframe_insert('scale', frame=frame_index+1)
                bone_index += 1
            bone_index = 0
    Again, it seems like previous frames are affecting later ones, because if I import a single frame from the middle of the animation, it looks fine.
    2. I can't assign armature-space animation matrices read from a file to a skeleton with hierarchy (parenting). In Blender 2.4 you could just assign them to PoseBone.poseMatrix and bones would deform perfectly whether the bones had a hierarchy or none at all. In Blender 2.6, there's PoseBone.matrix_basis and PoseBone.matrix. While matrix_basis is relative to the parent bone, matrix isn't - the API says it's in object space. So it should have worked, but doesn't. So I guess we need to calculate a local-space matrix from the armature-space animation matrices from the files. I tried multiplying PoseBone.matrix with PoseBone.parent.matrix.inverted(), in both possible orders, with no luck - still weird deformations.
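    For problem 1, one workaround that is sometimes suggested is to advance the scene to the frame being keyed and force a scene update after each matrix assignment, so a previous frame's pose cannot bleed into the next one. The sketch below is my own illustration against the Blender 2.6 Python API, not code from the question, and is untested; armature_object is an assumed reference to the imported armature, while frame_count, bone_names and animation_matrices are the question's own variables.
        import bpy

        scene = bpy.context.scene              # assumed: the scene that owns the armature
        pose = armature_object.pose            # assumed: armature_object is the imported rig

        for frame_index in range(frame_count):
            scene.frame_set(frame_index + 1)   # evaluate the timeline at the frame being keyed
            for bone_index, bone_name in enumerate(bone_names):
                pose.bones[bone_name].matrix = animation_matrices[frame_index][bone_index]
                scene.update()                 # flush the assignment before keying anything else
                pose.bones[bone_name].keyframe_insert('location', frame=frame_index + 1)
                pose.bones[bone_name].keyframe_insert('rotation_euler', frame=frame_index + 1)
                pose.bones[bone_name].keyframe_insert('scale', frame=frame_index + 1)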

    Read the article

  • Quarterly E-Business Suite Upgrade Recommendations: October 2012 Edition

    - by Steven Chan (Oracle Development)
    I've previously published advice on the general priorities for applying EBS updates.  But what are your top priorities for major upgrades to EBS and its technology stack components? Here is a summary of our latest upgrade recommendations for E-Business Suite updates and technology stack components.  These quarterly recommendations are based upon the latest updates to Oracle's product strategies, support deadlines, and newly-certified releases.
    Upgrade Recommendations for October 2012
    - EBS 11i users should upgrade to 12.1.3, or -- if staying on 11i -- should be on the minimum 11i patching baseline.
    - EBS 12.0 users should upgrade to 12.1.3, or -- if staying on 12.0 -- should be on the minimum 12.0 patching baseline.
    - EBS 12.1 users should upgrade to 12.1.3.
    - Oracle Database 10gR2 and 11gR1 users should upgrade to 11gR2 11.2.0.3.
    - EBS 12 users of Oracle Single Sign-On 10g should migrate to Oracle Access Manager 11g 11.1.1.5.
    - EBS 11i users of Oracle Single Sign-On 10g should migrate to Oracle Access Manager 10g 10.1.4.3.
    - Oracle Internet Directory 10g users should upgrade to Oracle Internet Directory 11g 11.1.1.6.
    - Oracle Discoverer users should migrate to Oracle Business Intelligence Enterprise Edition (OBIEE), Oracle Business Intelligence Applications (OBIA), or Discoverer 11g 11.1.1.6.
    - Oracle Portal 10g users should migrate to Oracle WebCenter 11g 11.1.1.6 or upgrade to Portal 11g 11.1.1.6.
    - All Windows desktop users should migrate from JInitiator and older Java releases to JRE 1.6.0_35 or later 1.6 updates.
    - All Firefox users should upgrade to Firefox Extended Support Release 10.
    Related Articles
    - Extended Support Fees Waived for E-Business Suite 11i and 12.0
    - On Database Patching and Support: A Primer for E-Business Suite Users
    - On Apps Tier Patching and Support: A Primer for E-Business Suite Users
    - EBS Support Information Center + Patching & Maintenance Advisor Available on My Oracle Support
    - What's the Best Way to Patch an E-Business Suite Environment?

    Read the article

  • tag structured Filesystems

    - by A.Rashad
    I hope this is the correct site - I lose my way between the 4 sister sites :) Let me ask the question this way. All file systems I have seen before are hierarchical: a root directory, with some branched directories, and so on, until we have files residing in these directories. The exception is the AS/400 file structure, which has the concept of a Library that serves somewhat like a directory, but only one level deep. Why not have directory-less filesystems where files are placed in a single location, and the file identifiers are referenced by a database of tag/file relationships? This way there would be no need for symbolic links; one file may have relations to multiple subjects, rather than being contained by only a single parent directory. I hope the idea is clear.
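    As a purely illustrative aside (not part of the question), the tag/file relationship the poster describes can be sketched in a few lines with Python's sqlite3 module; the table layout, file IDs and tag names below are made up for the example.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE files (file_id TEXT PRIMARY KEY, blob_location TEXT)")
        db.execute("CREATE TABLE tags  (tag TEXT, file_id TEXT REFERENCES files(file_id))")

        # One file, stored exactly once in a flat store, related to several subjects.
        db.execute("INSERT INTO files VALUES ('f001', '/store/f001')")
        db.executemany("INSERT INTO tags VALUES (?, 'f001')",
                       [("invoices",), ("2012",), ("project-x",)])

        # "Listing a directory" becomes a query by tag - no symbolic links required.
        rows = db.execute(
            "SELECT f.file_id, f.blob_location FROM files f "
            "JOIN tags t ON t.file_id = f.file_id WHERE t.tag = ?",
            ("invoices",)).fetchall()
        print(rows)   # [('f001', '/store/f001')]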

    Read the article

  • bash starts replacing the characters on the current line instead of moving over to the next line

    - by Lazer
    I use the bash shell:
    $ bash --version
    GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
    Copyright (C) 2005 Free Software Foundation, Inc.
    $
    Sometimes, when typing a command at the prompt that is pretty lengthy and does not fit on the current line, instead of displaying the extra characters on the next line, bash starts again on the current line, replacing the characters that were there and making a mess.
    What should happen:
    |---------------------------------------------|
    | $ my big long command takes a lot of argumen|
    | s and does not fit in a single line         |
    |                                             |
    |---------------------------------------------|
    What happens instead:
    |---------------------------------------------|
    | s and does not fit in a single linef argumen|
    |                                             |
    |                                             |
    |---------------------------------------------|
    The issue is intermittent. If I resize my shell window to a really small width, normal behaviour is restored. Does anyone have any idea what is happening here?
    $ echo $TERM
    xterm
    $ echo $PS1
    \[\e[30m\][\t]\[\e[0m\]\[\e]0;\w\a\]\[\e[30m\][\W]$
    $

    Read the article

  • Creating sitemap for Googlebot - how to mark dynamic content / dynamic subpages?

    - by ojek
    I have a website that is an Internet forum. This forum has many categories, and a single category page contains a lot of subpages with listed threads. This Internet forum is brand new, and about a week ago I filled it with a few hundred thousand threads. I then looked at my Google Webmasters Tools page to see any changes in indexing, but the index only went up from 300 to about 1200, so that means it did not index my added threads (although it added something). The following is what my sitemap.xml contains, which I uploaded to their website. Of course there is a lot more code; this is just a snippet for a single category. In my real sitemap file I have all the categories listed as below:
        <url>
          <loc>http://mysite.com/Forums/Physics</loc>
          <changefreq>hourly</changefreq>
        </url>
    Now, I would expect Googlebot to go into mysite.com/Forums/Physics, crawl through all the subpages with thread links, and then crawl inside each thread and index its content. How can I achieve this? Also, if this is unclear, I will add a real link to my website.
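    Crawlers generally discover deep forum content faster when every paginated listing page (and ideally every thread) is enumerated explicitly rather than just the category root. Below is a small illustrative generator for such entries - my own sketch, not code from the question; the URL patterns and the get_categories()/get_threads() helpers are hypothetical placeholders. Note that a single sitemap file is limited to 50,000 URLs, so a forum this size would normally split the output across several files referenced by a sitemap index.
        def get_categories():
            # hypothetical data access - replace with real forum queries
            return [{"name": "Physics", "page_count": 120}]

        def get_threads(category_name):
            # hypothetical - return the URL of every thread in the category
            return ["http://mysite.com/Forums/Physics/thread-1001"]

        def build_sitemap():
            entries = []
            for cat in get_categories():
                for page in range(1, cat["page_count"] + 1):
                    entries.append("  <url>\n"
                                   f"    <loc>http://mysite.com/Forums/{cat['name']}?page={page}</loc>\n"
                                   "    <changefreq>hourly</changefreq>\n"
                                   "  </url>")
                for thread_url in get_threads(cat["name"]):
                    entries.append(f"  <url>\n    <loc>{thread_url}</loc>\n  </url>")
            return ('<?xml version="1.0" encoding="UTF-8"?>\n'
                    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
                    + "\n".join(entries) + "\n</urlset>")

        with open("sitemap.xml", "w") as f:
            f.write(build_sitemap())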

    Read the article

  • Dual-bootable Virtual Machine

    - by ojrac
    My work computer is a Linux desktop with a Windows 7 virtual machine for Visual Studio and IE testing. I'm very picky, and I don't want to configure two Windows installs... but I can't think of a way to do this without running afoul of Windows activation. I've already set up VirtualBox to run my VM off a physical hard drive, and grub isn't too hard to configure. But it'd be a waste of time without solving the activation problem. Is there any way I can boot into a single install of Windows as a virtual machine and on actual hardware without having to reactivate (until I'm eventually flagged as a pirate) every time I switch between the two? Is there any MS-endorsed way to use a single installed license with two sets of hardware?

    Read the article

  • TrueCrypt & upgrading your hard drive?

    - by Danielb
    I currently use TrueCrypt to encrypt the hard drive in a Win7 laptop (everything in a single partition). I am looking to upgrade the hard drive to a model with significantly more storage capacity. I've had a look through the documentation but I couldn't see anything about this particular scenario. I assume I need to do something like the following:
    1. Remove the encryption from the existing drive.
    2. Clone the existing drive image onto the new hard drive.
    3. Physically install the new drive into the laptop.
    4. Resize the single partition to use all the space in the new drive.
    5. Encrypt all of the new bigger drive with TrueCrypt.

    Read the article

  • Firefox Master Password (ssh-agent)

    - by BCable
    I use the master password feature of Firefox, and I also use SSH keys to log in to a bunch of UNIX machines. For SSH, there is a very useful application called ssh-agent that runs in the background holding the unlocked key, so you don't have to type the passphrase every single time you want to connect. I open and close Firefox a lot, so I was curious: is there a way to have Firefox run in the background (preferably doing nothing, but the whole process would be fine I guess as well) so that I don't have to type my master password every single time I open Firefox? Thanks!

    Read the article

  • DC on Hyper V Host

    - by Saif Khan
    I've read a few similar questions but wasn't clear. I have a small office (13 users) and recently purchased a new single server (this is all I have to work with for now). Server specs:
    - Memory: 16GB
    - Drives: 6 SCSI (2TB)
    - Processor: dual quad-core
    I plan on making the Hyper-V host the DC and then running 2 VMs, one as a file server and the other as an application server. Question - since I am restricted to this single server, would it be a big issue making the Hyper-V host the domain controller? Your input is greatly appreciated.

    Read the article

  • How can I insert bullet point data into Microsoft Excel spreadsheet?

    - by REACHUS
    Sometimes when I do research, I gather data that should be presented as bullet points, preferably in a single cell (as it is the kind of data I would not process in any way in the future). I am looking for a way to make it readable for other people using the spreadsheet (on the screen, as well as when they print the spreadsheet). I would like to make something like this:
    ————————————————————
    | * bullet point 1 |
    | * bullet point 2 |
    | * bullet point 3 |
    ————————————————————
    So far the only solution is to edit something like the above in a text editor and then paste it into Excel (as I cannot really make bullet points in a single cell). Is there any better solution?
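    For what it's worth: interactively, a line break inside a single Excel cell is entered with Alt+Enter, and a bullet character can simply be typed or pasted in front of each line. If the research data is already passing through a script, the same result can be produced programmatically - the sketch below uses the openpyxl Python library, which is an assumption on my part since the question does not mention scripting; the file name is made up.
        from openpyxl import Workbook
        from openpyxl.styles import Alignment

        wb = Workbook()
        ws = wb.active

        # One cell, several lines, each prefixed with a bullet character.
        ws["A1"] = "\u2022 bullet point 1\n\u2022 bullet point 2\n\u2022 bullet point 3"
        ws["A1"].alignment = Alignment(wrap_text=True)   # wrap_text makes Excel honour the line breaks

        wb.save("research_notes.xlsx")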

    Read the article

  • Server-to-Switch Trunking in Procurve switch, what does this mean?

    - by MattUebel
    I am looking to set up switch redundancy in a new datacenter environment. IEEE 802.3ad seems to be the go-to concept for this, at least when paired with a technology that gets around the "single switch" limitation for the link aggregation. Looking through the brochure for a ProCurve switch I see:
    "Server-to-Switch Distributed Trunking, which allows a server to connect to two switches with one logical trunk; increases resiliency and enables load sharing in virtualized data centers"
    http://www.procurve.com/docs/products/brochures/5400_3500%20Product%20Brochure4AA0-4236ENW.pdf
    I am trying to figure out how this relates to the 802.3ad standard, as it seems that it would give me what I want (one server has 2 NICs, each connected to a separate switch, together forming a single logical NIC which would provide the happy redundancy we want), but I am looking for someone familiar with this concept who could add to it.

    Read the article

  • avoid duplicate copies in Thunderbird when using Gmail labels

    - by mandy
    I like the system of multiple Gmail labels but am concerned about the space used. I think that TB stores a separate copy of the whole email in every folder corresponding to a Gmail label (including a copy in the "allmail" folder along with a separate copy in each of the other folders). Can I arrange to store a single copy of the email in TB (perhaps in the allmail folder), with a shortcut-type pointer from the individual folders to that single copy? (I.e. for every email with one Gmail label, e.g. "personal", there are already 2 copies in TB: one in the allmail folder and a second in the "personal" folder.)

    Read the article

  • Ubuntu displaying GDM but no login

    - by Shawn
    Ubuntu (Wubi, Lucid Lynx) boots and shows the login screen itself with the background and plays the boot sound but a list of users is never displayed. A mouse is on screen and I can move it but, alas, it does nothing. Dropping to a virtual term with CTRL+Alt+F# drops me to a cursor but I can't actually input anything. I can't boot into single-user with GRUB since it's Wubi and it never specifies a boot kernel directly in GRUB's initial menu.lst (only in files that it then reads from). Other details that may be helpful: Single monitor Same video card that's been working for months No new hardware Edit: I ssh'd in since it evidently booted up the sshd which is handy. dpkg-reconfigure gdm didn't do anything helpful. I do, however, get a "no seat-id found" when manually running it.

    Read the article

  • How can I discourage the use of Access?

    - by Greg Buehler
    Let's pretend that a very large company (revenue numbers with more than 8 figures) is looking to do a refresh on a software system, particularly the dashboard used by employees. This system was originally put together in the early 1990s to handle inventory tracking and storage across a variety of facilities (10+). Since this large company is now in the process of implementing some of these inventory processes with SAP, they are in need of a major refresh.
    The existing system:
    - A Microsoft Access project performs dashboard duties.
    - Unique shipping/receiving configurations at different facilities require unique forms and queries within the Access project.
    - It uses 3rd-party libraries referenced by Access to directly interface with a control system (read: motors, conveyors, and counters).
    - There are individual SQL Server 2000 instances (some traces of pre-update SQL Server 6.0 documents) at each facility.
    The issue: This system started as a home-brewed inventory tracking scheme with a single internal sponsor who is still in charge of the technical direction. The original sponsor prescribed the desired deliverables that are being called for in the current RFP. The RFP describes a system based around a single Access project. Any suggestion that Access is ill-suited for a project of this scope is shot down under the reasoning that "it works for the scope now". Are there any case studies, notices, or statements that can be used to dissuade this potential customer from repeating their mistake? Does Microsoft make any statements directly about when it is highly recommended to ditch Access?

    Read the article

  • Raid Shows Up as Multiple Drives - Can't Mount

    - by manyxcxi
    I have a single hard drive that the OS is installed on, and I have a Sil RAID card installed with two matching 500GB HDDs set up in RAID 0 and formatted - they're completely empty. For whatever reason they are showing up as /dev/sdb and /dev/sdc and not as a single hard drive. I used fdisk to format both RAID drives as Linux raid auto (fd), but I cannot mount either device and dmraid doesn't seem to want to work. What step am I missing? When I installed 9.04 oh so long ago it seems like it recognized and automatically did everything that needed to be done; now I'm stuck.
    dmraid output:
    root@tripoli:~# dmraid -r
    /dev/sdc: sil, "sil_biaebhadcfcb", stripe, ok, 976771072 sectors, data@ 0
    /dev/sdb: sil, "sil_biaebhadcfcb", stripe, ok, 976771072 sectors, data@ 0
    root@tripoli:~# dmraid -ay
    RAID set "sil_biaebhadcfcb" already active
    fdisk output:
    root@tripoli:~# fdisk -l
    Disk /dev/sda: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000b9b01
    Device Boot      Start      End      Blocks   Id  System
    /dev/sda1   *        1        32      248832   83  Linux
    Partition 1 does not end on cylinder boundary.
    /dev/sda2           32     60802   488134657    5  Extended
    /dev/sda5           32     60802   488134656   8e  Linux LVM
    Disk /dev/sdb: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x6ead5c9a
    Device Boot      Start      End      Blocks   Id  System
    /dev/sdb1            1     60801   488384001   fd  Linux raid autodetect
    Disk /dev/sdc: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xe6e2af28
    Device Boot      Start      End      Blocks   Id  System
    /dev/sdc1            1     60801   488384001   fd  Linux raid autodetect

    Read the article

  • XNA Skinned Animated Mesh Rendering Exported from Maya

    - by Devin Garner
    I am working on translating an old RTS game engine I wrote from DirectX9 to XNA. My old models didn't have animation and are in an old format, so I'm trying with an FBX file. I temporarily "borrowed" a model from League of Legends just to test whether my rendering is working correctly (obviously I'll have to get legit models later, but I just want to test if my programming is correct). I imported the mesh/bones/skin/animation into Maya 2012 using an "unnamed" 3rd-party import tool. Everything looks correct in Maya and it renders the animations flawlessly. I exported everything into a single FBX file (with only a single animation). I then tried to load this model using the example at the following site: http://create.msdn.com/en-US/education/catalog/sample/skinned_model With my exported FBX, the animation looks correct for most of the frames; however, at random times it screws up for a split second. Basically, the body/arms/head will look right, but the leg/foot will shoot out to a random point in space for a second and then go back to the normal position. The original FBX from the sample looks correct in my program. It seems odd that my model was imported into Maya wrong, since it displays fine in Maya. So, I'm thinking either I'm exporting it wrong, or the sample code is bad and the model from the sample caters to the sample's bad code. I'm new to 3D programming and Maya, so chances are I'm doing something wrong in the export. I'm using mostly the defaults, but I've tried all 3 interpolation modes (quaternion, euler, resample). Thanks

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx
    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!
    In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without any particular database engine in mind. What would you do? How would you ensure that the transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:
    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Don't do either.
    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, that can be a single ledger entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}.
    The implication of the original question - "how do you enforce the non-negative balance rule" - then boils down to:
    1. Insert an entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.
    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions. You simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.
    Back to some nagging questions though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
    In our case - in choosing a ledger approach - we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transaction" issue.
    Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees an "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have the ledger either with (after) or without (before) the ledger entry that got written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way.
    Next, we might worry about data loss. Here, Mongo offers several write-concerns. A write-concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but not actually written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll regarding how cute a furry animal is, but not so good for business. There are several other write-concerns, varying from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required beyond the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read-lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point.
    Where does this leave us? Oh, yes. Eventual consistency. By now, we ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are 2 options to deal with this. Similar to write-concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?", we'll query that same writable instance. But B and anyone else in the world can just chill and read from a read-only instance. They have no basis to expect that the ledger has just been written to.
    So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating an application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all the databases be locked just to paint an "all done" on screen. People understand. They have already been trained by many online businesses that placing an order does not mean the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).
    The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never returns a transaction newer than, say, 15 minutes whose validation flag is not set. This buys us time in 2 ways: replication can catch up to all instances by then, and validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time as, or 1ms after, the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The 2 transactions (attempted / reverted) would be visible, since we do actually account for the attempt.
    Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and 2 independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient-funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.
    So where are we now with the nagging questions? "No joins": huh? What are those for? "No transactions": you mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either. "No hope for real applications": well... there are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
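    To make the ledger flow described above concrete, here is a minimal sketch using Python and pymongo - my illustration, not the author's code. The {ts, from, to, amount} document shape and the compensating-entry rollback follow the post; the database, collection and field names are assumptions.
        from datetime import datetime, timezone
        from pymongo import MongoClient

        ledger = MongoClient()["bank"]["ledger"]          # assumed database/collection names

        def balance(account_id):
            """Sum every credit to and debit from an account across the whole ledger."""
            credits = ledger.aggregate([{"$match": {"to": account_id}},
                                        {"$group": {"_id": None, "total": {"$sum": "$amount"}}}])
            debits = ledger.aggregate([{"$match": {"from": account_id}},
                                       {"$group": {"_id": None, "total": {"$sum": "$amount"}}}])
            return (next(credits, {"total": 0})["total"] -
                    next(debits, {"total": 0})["total"])

        def transfer(a, b, amount):
            # 1. Insert the ledger entry - a single-document, atomic write.
            entry = {"ts": datetime.now(timezone.utc), "from": a, "to": b, "amount": amount}
            entry_id = ledger.insert_one(entry).inserted_id
            # 2. Validate recent entries against the writable (primary) instance.
            if balance(a) < 0:
                # 3. Roll back with a compensating entry rather than editing history.
                ledger.insert_one({"ts": datetime.now(timezone.utc), "from": b, "to": a,
                                   "amount": amount, "compensates": entry_id})
                return False
            return True
    Note that the rollback is itself an insert: history is never edited, which is what keeps every write inside Mongo's single-document atomicity.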

    Read the article

  • Star schema [fact 1:n dimension]...how?

    - by Mike Gates
    I am a newcomer to data warehouses and have what I hope is an easy question about building a star schema: If I have a fact table where a fact record naturally has a one-to-many relationship with a single dimension, how can a star schema be modeled to support this? For example: Fact Table: Point of Sale entry (the measurement is DollarAmount) Dimension Table: Promotions (these are sales promotions in effect when a sale was made) The situation is that I want a single Point Of Sale entry to be associated with multiple different Promotions. These Promotions cannot be their own dimensions as there are many many many promotions. How do I do this?

    Read the article

  • ganglia graphs like munin for cpu, etc?

    - by CarpeNoctem
    I'm coming from munin and a CPU graph contains data for system, user, nice, etc ALL on one graph. I just installed ganglia and setup the basic monitoring. It appears that each type of cpu data is a separate graph! WTF is this and can I change the defaults to combine these into a single per host? That is my question, how do I combine cpu data into a single graph. Also, can I change the layout to something closer to munin's day-week side-by-side layout? I'm trying to be impartial and give ganglia a chance. ;)

    Read the article

  • Introducing Oracle Multitenant

    - by OracleMultitenant
    The First Database Designed for the Cloud
    Today Oracle announced the general availability (GA) of Oracle Database 12c, the first database designed for the Cloud. Oracle Multitenant, new with Oracle Database 12c, is a key component of this - a new architecture for consolidating databases and simplifying operations in the Cloud. With this, the inaugural post in the Multitenant blog, my goal is to start the conversation about Oracle Multitenant. We are very proud of this new architecture, which we view as a major advance for Oracle. Customers, partners and analysts who have had previews are very excited about its capabilities and its flexibility. This high-level review of Oracle Multitenant will touch on our design considerations and how we re-architected our database for the cloud. I'll briefly describe our new multitenant architecture and explain its key benefits. Finally I'll mention some of the major use cases we see for Oracle Multitenant.
    Industry Trends
    We always start by talking to our customers about the pressures and challenges they're facing and what trends they're seeing in the industry. Some things don't change. They face the same pressures and the same requirements as ever. Pressures: do more with less; be faster, leaner, cheaper, and deliver services 24/7. Big companies have achieved scale; now they want to realize economies of scale. As ever, DBAs are faced with the challenges of patching and upgrading large numbers of databases, and provisioning new ones. Requirements are familiar: performance, scalability, reliability and high availability are non-negotiable. They need ever more security in this threatening climate. There's no time to stop and retool with new applications. What's new are the trends. These are the techniques to use to respond to these pressures within the constraints of the requirements. With the advent of cloud computing and the availability of massively powerful servers - even engineered systems such as Exadata - our customers want to consolidate many applications into fewer, larger servers. There's a move to standardized services - even self-service.
    Consolidation
    Consolidation is not new; companies have tried various approaches to consolidation of databases in the cloud. One approach is to partition a powerful server between several virtual machines, one per application. A downside of this is that you have the resource and management overheads of OS and RDBMS per VM - that is, per application. Another is that you have replaced physical sprawl with virtual sprawl, and virtual sprawl is still expensive to manage. In the dedicated database model, we have a single physical server supporting multiple databases, one per application. So there's a shared OS overhead, but RDBMS process and memory overhead are replicated per application. Let's think about our traditional Oracle Database architecture. Every time we create a database, be it a production database, a development or a test database, what do we do?
    We create a set of files, we allocate a bunch of memory for managing the data, and we kick off a series of background processes. This is replicated for every one of the databases that we create. As more and more databases are fired up, these replicated overheads quickly consume the available server resources, and this limits the number of applications we can run on any given server. In Oracle Database 11g and earlier, the highest degree of consolidation could be achieved by what we call schema consolidation. In this model we have one big server with one big database. Individual applications are installed in separate schemas or table-owners. Database overheads are shared between all applications, which affords maximum consolidation. The shortcomings are that application changes are often required, and there is no tenant isolation. One bad apple can spoil the whole batch.
    New Architecture & Benefits
    In Oracle Database 12c, we have a new multitenant architecture, featuring pluggable databases. This delivers all the resource utilization advantages of schema consolidation with none of the downsides. There are two parts to the term "pluggable database": "pluggable", which is new, and "database", which is familiar.  Before we get to the exciting new stuff, let's discuss what hasn't changed. A pluggable database is a fully functional Oracle database. It's not watered down in any way. From the perspective of an application or an end user it hasn't changed at all. This is very important because it means that no application changes are required to adopt this new architecture. There are many thousands of applications built on Oracle databases and they are all ready to run on Oracle Multitenant. So we have these self-contained pluggable databases (PDBs), and as their name suggests, they are plugged into a multitenant container database (CDB). The CDB behaves as a single database from the operations point of view. Very much as we had with the schema consolidation model, we only have a single set of Oracle background processes and a single, shared database memory requirement. This gives us very high consolidation density, which affords maximum reduction in capital expenses (CapEx). By performing management operations at the CDB level - "managing many as one" - we can achieve great reductions in operating expenses (OpEx) as well, but we retain granular control where appropriate. Furthermore, the "pluggability" capability gives us portability, and this adds a tremendous amount of agility. We can simply unplug a PDB from one CDB and plug it into another CDB, for example to move it from one SLA tier to another. I'll explore all these new capabilities in much more detail in a future posting.
    Use Cases
    We can identify a number of use cases for Oracle Multitenant. Here are a few of the major ones.
    - Development / Testing - where individual engineers need rapid provisioning and recycling of private copies of a few "master test databases"
    - Consolidation - of disparate applications using fewer, more powerful servers
    - Software as a Service - deploying separate copies of identical applications to individual tenants
    - Database as a Service - typically self-service provisioning of databases on the private cloud
    - Application Distribution from ISV / Installation by Customer - eliminating many typical installation steps (create schema, import seed data, import application code PL/SQL…) - just plug in a PDB!
    - High volume data distribution - literally via disk drives in envelopes distributed by truck! - distribution of things like GIS or MDM master databases
    - …various others!
    Benefits
    Previous approaches to consolidation have involved a trade-off between reductions in Capital Expenses (CapEx) and Operating Expenses (OpEx), and they've usually come at the expense of agility. With Oracle Multitenant you can have your cake and eat it:
    - Minimize CapEx: more applications per server
    - Minimize OpEx: manage many as one; standardized procedures and services; rapid provisioning
    - Maximize Agility: cloning for development and testing; portability through pluggability; scalability with RAC
    - Ease of Adoption: applications run unchanged; it's a pure deployment choice - neither the database backend nor the application needs to be changed
    In future postings I'll explore various aspects in more detail. However, if you feel compelled to devour everything you can about Oracle Multitenant this very minute, have no fear. Visit the Multitenant page on OTN and explore the various resources we have available there. Among these, Oracle Distinguished Product Manager Bryn Llewellyn has written an excellent, thorough, and exhaustively detailed White Paper about Oracle Multitenant, which is available here.
    Follow me: I tweet @OraclePDB #OracleMultitenant

    Read the article

  • High availability for Windows Service under Windows Server 2003

    - by empi
    Hi. I have a following situation: I need to deploy a windows service that listens for incoming request on tcp port (basically WCF service). I have a High Availability requirement - the service must be deployed on two servers and if the service stops (only the service, not the whole server) on one server, all the requests must be redirected to the second one. For me it looks like a basic failover scenario. How can I achieve this on Windows Server 2003? Should I use Microsoft Cluster Service or Network Load Balancing? The important part is that the process of swapping the servers should not concern the clients (the client must see only single address / single host or domain name). Thanks in advance for help.

    Read the article

  • Should I have a Heroku worker dyno to poll an AWS SQS queue?

    - by Luccas
    I'm confused about where I should have a script polling an AWS SQS queue inside a Rails application. If I use a thread inside the web app, it will probably use CPU cycles listening to this queue forever and affect performance. And if I reserve a single Heroku worker dyno, it costs $34.50 per month. Does it make sense to pay this price just for a single queue poll? Or is it not the case to use a worker for it? The script code:
        queue = AWS::SQS::Queue.new(SQSADDR['my_queue'])
        queue.poll(:idle_timeout => 20) do |msg|
          # code here
        end
    I need help!! Thanks

    Read the article

  • Where can I find design exercises to work on?

    - by Oak
    I feel it's important to continue practicing my problem-solving skills. Writing my own mini-projects is one way, but another is to try and solve problems posted online. It's easy to find interesting programming quizzes online that require applying clever algorithms to solve - Project Euler is one well-known example. However, in a lot of real-life projects the design of the software - especially in the initial phases - has a large impact and at later stages it cannot be tweaked as easily as plain algorithms. In order to improve these skills, I'm looking for any collection of design problems. When I say "design", I mean the abstract design of a software solution - for example what modules will there be and what are the dependencies between them, how data will flow in the program, what sort of data needs to be saved in the database, etc. Design problems are those problems that are critical to solve in the early stages of any project, but their solution is a whiteboard diagram without a single line of code. Of course these sort of problems do not have a single correct solution, but I'll be especially happy with any place that also displays pros and cons of the typical solutions that might be used to approach the problem.

    Read the article
