Search Results

Search found 31606 results on 1265 pages for 'generate table'.

Page 490/1265 | < Previous Page | 486 487 488 489 490 491 492 493 494 495 496 497  | Next Page >

  • Webcast Series Part II: Integrated Infrastructure and Lifecycle Solutions for Capital Assets - A New Delivery Model

    - by Melissa Centurio Lopes
    Register today for the second part of this webcast series on Thursday, November 29, 2012, 10:00 a.m. PT / 1:00 p.m. ET. Project Portfolio Management solutions have an immediate and lasting impact on both providers’ and contractors’ bottom lines by helping to manage the costs and risks of healthcare infrastructure projects from planning through handover and operation. During this webcast, Integrated Infrastructure and Lifecycle Solutions for Capital Assets - A New Delivery Model, Garrett Harley and Thomas Koulouris will continue their discussion on healthcare infrastructure strategy changes and will cover the following topics: the shift in healthcare infrastructure strategy and how it will impact providers and contractors; the Integrated Infrastructure & Lifecycle Solutions for Capital Assets and how these solutions help your business; communication and integration between providers and contractors and why it is so important to your bottom line; and the new integrated delivery system in healthcare infrastructure and how Project Portfolio Management is critical to the success of that system.

    Read the article

  • On WHM/cPanel how can you tell which packages are in use?

    - by Paul Masri
    I want to find out a) which packages are currently in use, and b) which accounts are using them, so I can do some housekeeping. If I go to 'Show Reseller Accounts', it'll list all the accounts by reseller and I can see which accounts use which, so I could go the longhand route of counting through them one by one, but I wondered if there's a way of getting a simple listing. NB: I've seen that if I go to 'Edit Reseller privileges & nameservers' for one of my resellers, I'll see a 'Package limits' table which has a 'Current' column that shows exactly what I'm looking for... but it's for that reseller only, and I don't get this table for resellers that have no package limits (e.g. me).

    Read the article

  • Partner OBI 11g 5-Day Hands-on Training Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    14 - 18 January 2013, Oracle Reading (UK). REGISTER HERE NOW. This 5-day hands-on workshop gives attendees practical experience with the OBI 11g environment. Participants will gain an in-depth understanding of the new architecture of OBIEE 11g, its security model, and installation/configuration, as well as reporting aspects such as the new ROLAP/MOLAP-style hierarchical browsing, new chart types, the Action Framework, and visualization. Please note that attendees are required to have a laptop. This training is only for OPN member partners. View the laptop requirements and detailed agenda here.

    Read the article

  • Intelligent Conflict Detection and Resolution

    - by Doug Reid
    Conflict Detection and Resolution in Oracle GoldenGate 11gR2 has gone through a significant overhaul. The improvements that have been made in this area are substantial and will make it easier for customers to implement complex, heterogeneous GoldenGate configurations. GoldenGate has provided methods for conflict detection and resolution for a number of past releases, but at Oracle we have had the opportunity to take advantage of some of the great ideas in this area. Oracle has had a feature-rich conflict detection and resolution framework in other products, and this has now been implemented in Oracle GoldenGate 11gR2. These improvements are geared toward helping customers more easily implement advanced configurations that require conflict detection and resolution, by providing a robust framework for conflict detection for all DML statements and resolution via pre-built methods, all with less code and simpler syntax than in prior releases. Conflict Detection and Resolution in Oracle GoldenGate 11gR2 is available for our supported heterogeneous platforms, which include Oracle Database, MySQL, Sybase ASE, SQL Server, and DB2 on Linux, Unix, Windows, and z/OS, plus DB2 on iSeries, which is newly supported in this release. Additional information on the Conflict Detection and Resolution capabilities can be found in our documentation.
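
    For a flavour of what the pre-built resolution methods look like, here is a hedged sketch of a Replicat MAP clause in the 11gR2 style (the schema, table, and timestamp column are made up; check the GoldenGate documentation for the exact options supported on your platform):

      -- Illustrative only: detect update conflicts on all columns and resolve
      -- UPDATEROWEXISTS by keeping the row with the newer timestamp.
      MAP fin.accounts, TARGET fin.accounts,
        COMPARECOLS (ON UPDATE ALL),
        RESOLVECONFLICT (UPDATEROWEXISTS, (DEFAULT, USEMAX (last_mod_ts)));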

    Read the article

  • Tuesday at Oracle OpenWorld 2012 - Must See Session: “Oracle Fusion Applications: Best Practices in Integration Design Patterns”

    - by Lionel Dubreuil
    Don’t miss the “CON8685 - Oracle Fusion Applications: Best Practices in Integration Design Patterns” session. Speakers: Rajesh Raheja - Senior Director, Development, Oracle; Ravi Sankaran - Director, Applications Development, Oracle. Date: Tuesday, Oct 2. Time: 1:15 PM - 2:15 PM. Location: Palace Hotel - Telegraph. Oracle Fusion Applications provide various ways to integrate their functional capabilities with other Oracle applications as well as third-party and legacy applications. In this session, you will learn the patterns used when communicating with Oracle Fusion Applications with a SOA approach. It covers identifying the integration artifacts (also known as assets) available in Oracle Enterprise Repository; how to invoke synchronous and asynchronous Web services; how to import and export bulk data; and integration issues to look out for. The patterns are applicable to on-premises and SaaS/cloud deployment modes and are indicated as such. Objectives for this session are to: highlight the various ways to integrate with Oracle Fusion Applications; showcase the use of Oracle Fusion Middleware technologies for integration; and describe best practices and design patterns for integration.

    Read the article

  • My First Post with Windows Live Writer

    - by geekrutherford
    I receive daily newsletters from DotNetSlackers regarding various .NET topics. Today I read an article from an apparent Microsoft employee who gave some insight into the organizational culture within the company. Always on the lookout for new tools and technologies, I noted that he used Windows Live Writer for editing and managing his blog content. I thought I’d give it a try. Let’s try adding a picture and adjusting its placement within this blog post relative to this text. … Adding the image is quite simple using the “Insert” options given to the right of the blog content editor. Inserting without using a table makes it impossible to align text exactly the way you want, but that’s inherent with any WYSIWYG editor. Instead, using a table with at least 2 columns (1 for text and 1 for the image) works best. Let’s try adding a map! That’s pretty sweet! You can map to any location within the editor itself. A dialog opens which utilizes Bing!, allowing you to enter the address, etc. Well, that’s enough for me. Time to pimp this to my wife for use on our family blog. BTW, Windows Live Writer allows you to post content to a number of blogging sites…fantastic!!!

    Read the article

  • Is it safer to use the same IV every time data is encrypted, or to use a dynamic IV that is sent together with the encrypted text? [closed]

    - by kiamlaluno
    When encrypting data that is then sent to a server, is it better to always use the same IV, which is already known to the receiving server, or to use a dynamic IV that is then sent along to the receiving server? I am referring to the case where the remote server receives data from another server, or from a client application, and executes operations on a database table, in the table row identified by the received data. Which of the following PHP snippets is preferable? First snippet (random IV, sent together with the encrypted text): $iv = mcrypt_create_iv(mcrypt_enc_get_iv_size($td), MCRYPT_RAND); $ks = mcrypt_enc_get_key_size($td); $key = substr(md5('very secret key'), 0, $ks); mcrypt_generic_init($td, $key, $iv); $encrypted = mcrypt_generic($td, 'This is very important data'); send_encripted_data(combine_iv_encrypted_text($iv, $encrypted)); Second snippet (fixed IV, already known to the server): $ks = mcrypt_enc_get_key_size($td); $key = substr(md5('very secret key'), 0, $ks); mcrypt_generic_init($td, $key, $iv); send_encripted_data(mcrypt_generic($td, 'This is very important data')); In which way is one of the snippets more vulnerable than the other one?
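
    For context, the IV itself does not have to be kept secret (only the key does), so sending a fresh random IV with each message, as the first snippet does, is the usual approach. Below is a minimal receiving-side sketch in the same deprecated mcrypt style as the question, assuming Rijndael-128 in CBC mode (the question does not name a cipher) and assuming the sender prepended the IV to the ciphertext; $payload and the weak md5 key derivation mirror the question and are not a recommendation:

      // Hypothetical receiving side: $payload = IV . ciphertext as produced by
      // the first snippet. Split off the IV sent in the clear, then decrypt.
      $td = mcrypt_module_open(MCRYPT_RIJNDAEL_128, '', MCRYPT_MODE_CBC, '');
      $iv_size = mcrypt_enc_get_iv_size($td);
      $iv = substr($payload, 0, $iv_size);
      $ciphertext = substr($payload, $iv_size);
      $ks = mcrypt_enc_get_key_size($td);
      $key = substr(md5('very secret key'), 0, $ks); // same key derivation as the question
      mcrypt_generic_init($td, $key, $iv);
      $plaintext = mdecrypt_generic($td, $ciphertext);
      mcrypt_generic_deinit($td);
      mcrypt_module_close($td);

    Reusing one fixed IV, as in the second snippet, means two messages that start with the same plaintext also start with the same ciphertext, which is the main reason a per-message IV is generally preferred.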

    Read the article

  • Oracle Enterprise Manager 12c R3 introduces advancements in cloud lifecycle and operations management

    - by Anand Akela
    Oracle Enterprise Manager 12c Release 3 (R3) was announced (Press Release) earlier today. It is now available for download at OTN. This latest release features improvements in several areas, including: Improvements to Private Cloud and Engineered Systems Management; Expanded Middleware and Application Management Capabilities; and Efficiency Gains for Enterprise Manager Users in EM’s Enterprise-Ready Framework. You can learn more about what's new in Oracle Enterprise Manager 12c R3 in the Enterprise Manager 12c documentation. You will see more blogs and details about the new features during the next few weeks; please let us know what you think. On July 18th, you can join us at a webcast to hear Thomas Kurian, EVP of Product Development, on what Oracle Engineering has achieved with Oracle Enterprise Manager 12c Release 3 to address these challenges. Later during this webcast, Oracle experts will discuss the latest capabilities in Oracle Enterprise Manager 12c Release 3 for cloud lifecycle and operations management. The presentation will be followed by a live Q&A session with Oracle experts. You can also join us online on Twitter to get your specific questions answered; please use the hash tag #em12c to join the conversation. Register Now for the Webcast! Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Slowly Changing Dimensions handling in PowerPivot (and BISM?)

    - by Marco Russo (SQLBI)
    During the PowerPivot Workshop in London we received many interesting questions, and Alberto had the inspiration to write this nice post about Slowly Changing Dimensions handling in PowerPivot. The consideration about SCD Type I attributes in a SCD Type II dimension is interesting – you can probably generate them in a more dynamic way in PowerPivot (thanks to VertiPaq and DAX) instead of relying on a relational table containing all the data you need, which usually requires a more complex ETL process....(read more)

    Read the article

  • Use MvcContrib Grid to Display a Grid of Data in ASP.NET MVC

    The past six articles in this series have looked at how to display a grid of data in an ASP.NET MVC application and how to implement features like sorting, paging, and filtering. In each of these past six tutorials we were responsible for generating the rendered markup for the grid. Our Views included the <table> tags, the <th> elements for the header row, and a foreach loop that emitted a series of <td> elements for each row to display in the grid. While this approach certainly works, it does lead to a bit of repetition and inflates the size of our Views. The ASP.NET MVC framework includes an HtmlHelper class that adds support for rendering HTML elements in a View. An instance of this class is available through the Html object, and is often used in a View to create action links (Html.ActionLink), textboxes (Html.TextBoxFor), and other HTML content. Such content could certainly be created by writing the markup by hand in the View; however, the HtmlHelper makes things easier by offering methods that emit common markup patterns. You can even create your own custom HTML Helpers by adding extension methods to the HtmlHelper class. MvcContrib is a popular, open source project that adds various functionality to the ASP.NET MVC framework. This includes a very versatile Grid HTML Helper that provides a strongly-typed way to construct a grid in your Views. Using MvcContrib's Grid HTML Helper you can ditch the <table>, <tr>, and <td> markup, and instead use syntax like Html.Grid(...). This article looks at using the MvcContrib Grid to display a grid of data in an ASP.NET MVC application. A future installment will show how to configure the MvcContrib Grid to support both sorting and paging. Read on to learn more! Read More >
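
    As a taste of what the article covers, here is a hedged sketch of the strongly-typed syntax the MvcContrib Grid enables in a view (the Product model and its properties are invented for illustration, and method names can differ between MvcContrib versions, so treat this as the general shape rather than copy-paste code):

      @* Illustrative only: the grid emits the <table> markup for us *@
      @model IEnumerable<MyApp.Models.Product>
      @Html.Grid(Model).Columns(column =>
      {
          column.For(p => p.Name).Named("Product");
          column.For(p => p.Price).Named("Unit Price");
      })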

    Read the article

  • How to Make a Website For Free

    If you have an idea for a new website, but you don't feel like spending an arm and a leg having one built for you, don't. There are many websites online that you can access that will help you create a website of your own and won't charge you a dime for it. Most of these free website builders are willing to help you create a website for free because they then advertise on your website and generate revenues in that fashion.

    Read the article

  • Apress Deal of the Day - 14/Mar/2011

    - by TATWORTH
    jQuery Recipes: A Problem-Solution Approach. jQuery is one of today’s most popular JavaScript web application development frameworks and libraries. jQuery Recipes can get you started with jQuery quickly and easily, and it will serve as a valuable long-term reference. $44.99 | Published Jan 2010 | Bintu Harwani

    Read the article

  • Support Changes for PeopleSoft Applications

    - by Marc Weintraub
    To ensure Oracle’s PeopleSoft applications customers continue to receive world-class support from Oracle and have ample opportunity to upgrade to PeopleSoft Release 9.2, Oracle recently announced the following changes to Oracle’s Lifetime Support: Extended Support for PeopleSoft Release 9.0 until June 2015; waiver of Extended Support fees on PeopleSoft Release 9.0; and waiver of Extended Support fees on PeopleSoft Release 9.1. The extension of Oracle Extended Support for PeopleSoft Release 9.0 applications from various months in 2014 to June 2015 is documented in the “Oracle Lifetime Support Policy” for Oracle Applications found here: http://www.oracle.com/us/support/library/lifetime-support-applications-069216.pdf The waiver of Oracle Extended Support uplift fees on PeopleSoft Release 9.0 and PeopleSoft Release 9.1 applications is documented in the “Oracle Software Technical Support Policies” found here: http://www.oracle.com/us/support/library/057419.pdf Furthermore, Oracle also recently announced Oracle Advanced Customer Support (ACS) service offerings for PeopleSoft Payroll for North America and PeopleSoft Global Payroll Release 8.9* to provide tax, legal, and regulatory updates. For more information on the Oracle Advanced Customer Support (ACS) service offerings, contact [email protected]. *Select country extensions only, including France, Spain, Mexico, United Kingdom, India, Australia, and New Zealand.

    Read the article

  • Go/Obj-C style interfaces with ability to extend compiled objects after initial release

    - by Skrylar
    I have a conceptual model for an object system which involves combining Go/Obj-C interfaces/protocols with being able to add virtual methods from any unit, not just the one which defines a class. The idea of this is to allow Ruby-ish open classes so you can take a minimalist approach to library development, and attach small pieces of functionality as they are actually needed by the whole program. Implementation of this involves a table of methods marked virtual in an RTTI table, which system functions are allowed to add to during module initialization. Upon typecasting an object to an interface, a Go-style lookup is done to create a vtable for that particular mapping and pass it off, so you can have performance comparable to C/C++. In this case, methods may be added afterwards which were not previously known, and these new methods allow newer interfaces to be satisfied. I like this idea because it seems like it would be very flexible (disregarding the potential for spaghetti code, which can happen with just about any model you use regardless). By wrapping the system calls for binding methods up in a set of clean C-compatible calls, one would also be able to integrate code with shared libraries and retain a decent amount of performance (Go does not do shared linking, and Objective-C does a dynamic lookup on each call). Is there a valid use-case for this model that would make it worth the extra background plumbing? As much as this Dylan-style extensibility would be nice to have access to, I can't quite bring myself to a use case that would justify the overhead other than "it could make some kinds of code more extensible in future scenarios."
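
    To make the mechanism concrete, here is a minimal sketch in plain C of the kind of plumbing described above: an open per-class method table that any module can append to, plus a per-(class, interface) vtable built once at cast time so later calls are plain indexed function-pointer calls. All names are hypothetical and error handling is omitted; it only illustrates the lookup-once idea, not the asker's actual design.

      #include <stdlib.h>
      #include <string.h>

      typedef void (*method_fn)(void *self);

      typedef struct { const char *name; method_fn fn; } method_entry;

      typedef struct {              /* per-class RTTI: an open, growable method table */
          method_entry *methods;
          size_t count, cap;
      } class_info;

      /* Any module may register extra virtual methods during its initialization. */
      void class_add_method(class_info *c, const char *name, method_fn fn) {
          if (c->count == c->cap) {
              c->cap = c->cap ? c->cap * 2 : 8;
              c->methods = realloc(c->methods, c->cap * sizeof *c->methods);
          }
          c->methods[c->count++] = (method_entry){ name, fn };
      }

      /* Go-style cast: build a flat vtable for one interface (a list of required
       * method names) so calls through the interface avoid per-call name lookups. */
      method_fn *make_itable(const class_info *c, const char **iface, size_t n) {
          method_fn *vt = calloc(n, sizeof *vt);
          for (size_t i = 0; i < n; i++)
              for (size_t j = 0; j < c->count; j++)
                  if (strcmp(c->methods[j].name, iface[i]) == 0)
                      vt[i] = c->methods[j].fn;
          return vt;                /* a NULL slot means the interface is not satisfied */
      }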

    Read the article

  • How do large blobs affect SQL delete performance, and how can I mitigate the impact?

    - by Max Pollack
    I'm currently experiencing a strange issue that my understanding of SQL Server doesn't quite mesh with. We use SQL as our file storage for our internal storage service, and our database has about half a million rows in it. Most of the files (86%) are 1MB or under, but even on fresh copies of our database where we simply populate the table with data for the purposes of a test, it appears that rows with large amounts of data stored in a BLOB frequently cause timeouts when our SQL Server is under load. My understanding of how SQL Server deletes rows is that it's a garbage collection process, i.e. the row is marked as a ghost and the row is later deleted by the ghost cleanup process after the changes are copied to the transaction log. This suggests to me that regardless of the size of the data in the blob, row deletion should be close to instantaneous. However, when deleting these rows we are definitely experiencing large numbers of timeouts and astoundingly low performance. In our test data set, it's files over 30MB that cause this issue. This is an edge case; we don't frequently encounter these, and even though we're looking into SQL filestream as a solution to some of our problems, we're trying to narrow down where these issues are originating from. We ARE performing our deletes inside of a transaction. We're also performing updates to metadata such as file size stats, but these exist in a separate table away from the file data itself. Hierarchy data is stored in the table that contains the file information. Really, in the end it's not so much what we're doing around the deletes that matters; we just can't find any references to low delete performance on rows that contain a large amount of data in a BLOB. We are trying to determine if this is even an avenue worth exploring, or if it has to be one of our processes around the delete that's causing the issue. Are there any situations in which this could occur? Is it common for a database server to come to the point of complete timeouts when many of these deletes are occurring simultaneously? Is there a way to combat this issue if it exists? (cross-posted from StackOverflow)
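
    One common mitigation for this kind of symptom, independent of the root cause, is to break large deletes into small batches so that no single transaction holds locks, grows the log, or queues up LOB deallocation work for very long. A rough T-SQL sketch, with a made-up table and filter since the real schema isn't shown:

      -- Hypothetical names: dbo.FileStore and UploadedOn are placeholders.
      -- Each small batch commits quickly instead of one long blocking delete.
      DECLARE @batch INT = 500;
      WHILE 1 = 1
      BEGIN
          DELETE TOP (@batch) FROM dbo.FileStore
          WHERE UploadedOn < '20120101';
          IF @@ROWCOUNT = 0 BREAK;
      END

    Whether this actually helps depends on where the time is going (ghost cleanup, log I/O, lock waits), so it is worth capturing wait statistics while the deletes run before settling on a fix.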

    Read the article

  • Virtualization or Raw Metal?

    - by THE
    With the growing number of customers who want to run Oracle EPM/BI (or other Fusion Middleware software) in a virtualized environment, we face a growing number of people asking whether running Oracle software within VMware is supported or not. Two KM articles reflect Oracle's policy towards the use of VMware: 249212.1 and 475484.1. The bottom line is: “you may use it at your own risk, but Oracle does not recommend it”. So far we have seen few problems with the use of VMware (other than performance and the usual limitations), but Oracle does not certify its software for use in VMware (and for RAC software specifically, actively refuses any support), and any issue that may occur will be fixed for the native OS only. If an issue is encountered, it is on the customer to prove that the issue is NOT due to VMware. See the “Oracle Fusion Middleware Supported System Configurations” page and also “Supported Virtualization and Partitioning Technologies for Oracle Fusion Middleware”.

    Read the article

  • Compiling for T4

    - by Darryl Gove
    I've recently had quite a few queries about compiling for T4 based systems. So it's probably a good time to review what I consider to be the best practices. Always use the latest compiler. Being in the compiler team, this is bound to be something I'd recommend. But the serious points are that (a) every release the tools get better and better, so you are going to be much more effective using the latest release, (b) every release we improve the generated code, so you will see things get better, and (c) old releases cannot know about new hardware. Always use optimisation. You should use at least -O to get some amount of optimisation. -xO4 is typically even better as this will add within-file inlining. Always generate debug information, using -g. This allows the tools to attribute information to lines of source. This is particularly important when profiling an application. The default target of -xtarget=generic is often sufficient. This setting is designed to produce a binary that runs well across all supported platforms. If the binary is going to be deployed on only a subset of architectures, then it is possible to produce a binary that only uses the instructions supported on these architectures, which may lead to some performance gains. I've previously discussed which chips support which architectures, and I'd recommend that you take a look at the chart that goes with the discussion. Crossfile optimisation (-xipo) can be very useful - particularly when the hot source code is distributed across multiple source files. If you're allowed to have something as geeky as favourite compiler optimisations, then this is mine! Profile feedback (-xprofile=[collect: | use:]) will help the compiler make the best code layout decisions, and is particularly effective with crossfile optimisations. But what makes this optimisation really useful is that codes that are dominated by branch instructions don't typically improve much with "traditional" compiler optimisation, but often do respond well to being built with profile feedback. The macro flag -fast aims to provide a one-stop "give me a fast application" flag. This usually gives a best-performing binary, but with a few caveats. It assumes the build platform is also the deployment platform, it enables floating point optimisations, and it makes some relatively weak assumptions about pointer aliasing. It's worth investigating. The SPARC64 processor, T3, and T4 implement floating-point multiply-accumulate instructions. These can substantially improve floating point performance. To generate them the compiler needs the flag -fma=fused and also needs an architecture that supports the instruction (at least -xarch=sparcfmaf). The most critical advice is that anyone doing performance work should profile their application. I cannot overstate how important it is to look at where the time is going in order to determine what can be done to improve it. I also presented at Oracle OpenWorld on this topic, so it might be helpful to review those slides.
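
    Pulling several of those recommendations together, an illustrative Oracle Solaris Studio compile line might look like the sketch below. Treat it as a starting point rather than a drop-in command: the target and architecture flags should match where you actually deploy, and -fast or profile feedback may be worth adding once you have profiled.

      # Illustrative only: optimisation, debug info, crossfile inlining, and
      # fused multiply-add on an FMA-capable SPARC target.
      cc -xO4 -g -xipo -xtarget=generic -xarch=sparcfmaf -fma=fused -o myapp myapp.c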

    Read the article

  • Installed 4GB memory but Windows XP 32 bit only reporting 2GB?

    - by AnthonyWJones
    I've just taken an existing XP Pro 32-bit system that had only 0.5GB of memory installed and maxed it out to 4GB. The BIOS reports the 4GB of RAM; however, when XP is booted and I look at the computer properties, only 2GB of RAM is reported. Can anyone explain this? Before we go up any blind alleys: the /3GB switch is not the answer here, I have no need for a single process to use more than 2GB of memory. I'm wondering if 32-bit XP Pro is deliberately limited to 2GB. I seem to remember seeing an excellent table on a Microsoft site listing all the various SKUs of Windows and what each one was limited to, but I can't seem to find that table now.

    Read the article

  • Sun2Oracle: Upgrading from DSEE to the next generation Oracle Unified Directory

    - by Darin Pendergraft
    Mark your calendars and register to join this webcast featuring Steve Giovanetti from Hub City Media, Albert Wu from UCLA, and our own Scott Bonnell as they discuss a directory upgrade project from Sun DSEE to Oracle Unified Directory. Date: Thursday, September 13, 2012. Time: 10:00 AM Pacific. Join us for this webcast and you will: learn from one customer that has successfully upgraded to the new platform; see what technology and business drivers influenced the upgrade; hear about the benefits of OUD’s elastic scalability and unparalleled performance; and get additional information and resources for planning an upgrade. Register Now!

    Read the article

  • Built and migrated to software RAID (mdadm) on a GPT disk, now can't assemble the array

    - by John H
    mdadm, GPT issues, unrecognized partitions. Simplified question: how do I get mdadm to recognize GPT partitions? I have been attempting to convert/copy my Ubuntu 11.10 OS from a single drive to software RAID 1. I have done similar in the past, but in this case I was adding in a drive that had been configured for GPT, and I tried to work with that without fully looking into the implications. Currently, I have a non-booting mdadm RAID 1 array of /dev/md127 (the OS assigned that name and it keeps picking it up). I am booting off of live USB keys, currently System Rescue CD from sysresccd. While gdisk and parted can see all the partitions, most of the OS utilities do not, including mdadm. My main goal is just to make the RAID array accessible so I can pull the data and start fresh (without using GPT).
    /dev/md127
    /dev/sda
    /dev/sda1 <- GPT type partition
    /dev/sda1 <- exists within the GPT part, member of md127
    /dev/sda2 <- exists within the GPT part, empty
    /dev/sdb
    /dev/sdb1 <- GPT type partition
    /dev/sdb1 <- exists within the GPT part, member of md127
    History: POINT A: The original OS was installed on sda (actually /dev/sda6). I used the Ubuntu live USB to add sdb. I got a warning from fdisk about GPT, so I used gdisk to create a raid partition (sdb1) and mdadm to create a raid1 mirror with a missing drive. I had many issues getting this working (including being unable to get grub to install), but I eventually got it to boot using grub on sda and /dev/md127 off of sdb. So at point A, I had copied my OS from sda6 to md127 on sdb. I then booted into a rescue mode and attempted to get a bootloader onto sdb, which failed. I then discovered my mistake: I had installed the raid onto sdb instead of sdb1, essentially overwriting the sdb1 partition. POINT B: I now had two copies of my data: one on md127/sdb, and one on sda. I destroyed the data on sda and created a new GPT table on sda. I then created sda1 for the raid array, and sda2 for a scratch partition. I added sda1 into the raid array and let it rebuild. md127 now covered /dev/sdb and /dev/sda1 as fully active and synced. POINT C: I rebooted onto linux rescue again and was still able to access the raid array. I then removed /dev/sdb from the array and created /dev/sdb1 for the raid. I added sdb1 to the array and let it sync. I was able to mount and access /dev/md127 without issues. Once it completed, both /dev/sda1 and /dev/sdb1 were GPT partitions and actively syncing. POINT D (current): I rebooted again to test if the array would boot, and grub failed to load. I booted off of my live thumb drive and found that I can no longer assemble the raid array. mdadm doesn't see the required partitions.
-- root@freshdesk /root % uname -a Linux freshdesk 3.0.24-std251-amd64 #2 SMP Sat Mar 17 12:08:55 UTC 2012 x86_64 AMD Athlon(tm) II X4 645 Processor AuthenticAMD GNU/Linux === /proc/partitions and parted look good: root@freshdesk /root % cat /proc/partitions major minor #blocks name 7 0 301788 loop0 8 0 976762584 sda 8 1 732579840 sda1 8 2 244181703 sda2 8 16 732574584 sdb 8 17 732573543 sdb1 8 32 7876607 sdc 8 33 7873349 sdc1 (parted) print all Model: ATA ST31000528AS (scsi) Disk /dev/sda: 1000GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 750GB 750GB ext4 2 750GB 1000GB 250GB Linux/Windows data Model: ATA SAMSUNG HD753LJ (scsi) Disk /dev/sdb: 750GB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 750GB 750GB ext4 Linux RAID raid Model: SanDisk SanDisk Cruzer (scsi) Disk /dev/sdc: 8066MB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 31.7kB 8062MB 8062MB primary fat32 boot, lba === # no sda2, and I double the sdb1 is the one shown in parted root@freshdesk /root % blkid /dev/loop0: TYPE="squashfs" /dev/sda1: UUID="75dd6c2d-f0a8-4302-9da4-792cc7d72355" TYPE="ext4" /dev/sdc1: LABEL="PENDRIVE" UUID="1102-3720" TYPE="vfat" /dev/sdb1: UUID="2dd89f15-65bb-ff88-e368-bf24bd0fce41" TYPE="linux_raid_member" root@freshdesk /root % mdadm -E /dev/sda1 mdadm: No md superblock detected on /dev/sda1. # this is probably a result of me attempting to force the array up, putting superblocks on the GPT partition root@freshdesk /root % mdadm -E /dev/sdb1 /dev/sdb1: Magic : a92b4efc Version : 0.90.00 UUID : 2dd89f15:65bbff88:e368bf24:bd0fce41 Creation Time : Fri Mar 30 19:25:30 2012 Raid Level : raid1 Used Dev Size : 732568320 (698.63 GiB 750.15 GB) Array Size : 732568320 (698.63 GiB 750.15 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 127 Update Time : Sat Mar 31 12:39:38 2012 State : clean Active Devices : 1 Working Devices : 2 Failed Devices : 1 Spare Devices : 1 Checksum : a7d038b3 - correct Events : 20195 Number Major Minor RaidDevice State this 2 8 17 2 spare /dev/sdb1 0 0 8 1 0 active sync /dev/sda1 1 1 0 0 1 faulty removed 2 2 8 17 2 spare /dev/sdb1 === root@freshdesk /root % mdadm -A /dev/md127 /dev/sda1 /dev/sdb1 mdadm: no recogniseable superblock on /dev/sda1 mdadm: /dev/sda1 has no superblock - assembly aborted root@freshdesk /root % mdadm -A /dev/md127 /dev/sdb1 mdadm: cannot open device /dev/sdb1: Device or resource busy mdadm: /dev/sdb1 has no superblock - assembly aborted

    Read the article

  • (Unity) Getting a mirrored mesh from my data structure

    - by Steve
    Here's the background: I'm in the beginning stages of an RTS game in Unity. I have a procedurally generated terrain with a perlin-noise height map, as well as a function to generate a river. The problem is that the graphical creation of the map is taking the data structure of the map and rotating it by 180 degrees. I noticed this problem when i was creating my rivers. I would set the River's height to flat, and noticed that the actual tiles that were flat in the graphical representation were flipped and mirrored. Here's 3 screenshots of the map from different angles: http://imgur.com/a/VLHHq As you can see, if you flipped (graphically) the river by 180 degrees on the z axis, it would fit where the terrain is flattened. I have a suspicion it is being caused by a misunderstanding on my part of how vertices work. Alas, here is a snippet of the code that is used: This code here creates a new array of Tile objects, which hold the information for each tile, including its type, coordinate, height, and it's 4 vertices public DTileMap (int size_x, int size_y) { this.size_x = size_x; this.size_y = size_y; //Initialize Map_Data Array of Tile Objects map_data = new Tile[size_x, size_y]; for (int j = 0; j < size_y; j++) { for (int i = 0; i < size_x; i++) { map_data [i, j] = new Tile (); map_data[i,j].coordinate.x = (int)i; map_data[i,j].coordinate.y = (int)j; map_data[i,j].vertices[0] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -j * GTileMap.TileMap.tileSize); map_data[i,j].vertices[1] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j) * GTileMap.TileMap.tileSize); map_data[i,j].vertices[2] = new Vector3 (i * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize); map_data[i,j].vertices[3] = new Vector3 ((i+1) * GTileMap.TileMap.tileSize, map_data[i,j].Height, -(j-1) * GTileMap.TileMap.tileSize); } } This code sets the river tiles to height 0 foreach (Tile t in map_data) { if (t.realType == "Water") { t.vertices[0].y = 0f; t.vertices[1].y = 0f; t.vertices[2].y = 0f; t.vertices[3].y = 0f; } } And below is the code to generate the actual graphics from the data: public void BuildMesh () { DTileMap.DTileMap map = new DTileMap.DTileMap (size_x, size_z); int numTiles = size_x * size_z; int numTris = numTiles * 2; int vsize_x = size_x + 1; int vsize_z = size_z + 1; int numVerts = vsize_x * vsize_z; // Generate the mesh data Vector3[] vertices = new Vector3[ numVerts ]; Vector3[] normals = new Vector3[numVerts]; Vector2[] uv = new Vector2[numVerts]; int[] triangles = new int[ numTris * 3 ]; int x, z; for (z=0; z < vsize_z; z++) { for (x=0; x < vsize_x; x++) { normals [z * vsize_x + x] = Vector3.up; uv [z * vsize_x + x] = new Vector2 ((float)x / size_x, 1f - (float)z / size_z); } } for (z=0; z < vsize_z; z+=1) { for (x=0; x < vsize_x; x+=1) { if (x == vsize_x - 1 && z == vsize_z - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z - 1].vertices [3]; } else if (z == vsize_z - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z - 1].vertices [2]; } else if (x == vsize_x - 1) { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x - 1, z].vertices [1]; } else { vertices [z * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [0]; vertices [z * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [1]; vertices [(z+1) * vsize_x + x] = DTileMap.DTileMap.map_data [x, z].vertices [2]; vertices [(z+1) * vsize_x + x+1] = DTileMap.DTileMap.map_data [x, z].vertices [3]; } } } } for (z=0; 
z < size_z; z++) { for (x=0; x < size_x; x++) { int squareIndex = z * size_x + x; int triOffset = squareIndex * 6; triangles [triOffset + 0] = z * vsize_x + x + 0; triangles [triOffset + 2] = z * vsize_x + x + vsize_x + 0; triangles [triOffset + 1] = z * vsize_x + x + vsize_x + 1; triangles [triOffset + 3] = z * vsize_x + x + 0; triangles [triOffset + 5] = z * vsize_x + x + vsize_x + 1; triangles [triOffset + 4] = z * vsize_x + x + 1; } } // Create a new Mesh and populate with the data Mesh mesh = new Mesh (); mesh.vertices = vertices; mesh.triangles = triangles; mesh.normals = normals; mesh.uv = uv; // Assign our mesh to our filter/renderer/collider MeshFilter mesh_filter = GetComponent<MeshFilter> (); MeshCollider mesh_collider = GetComponent<MeshCollider> (); mesh_filter.mesh = mesh; mesh_collider.sharedMesh = mesh; calculateMeshTangents (mesh); BuildTexture (map); } If this looks familiar to you, its because i got most of it from Quill18. I've been slowly adapting it for my uses. And please include any suggestions you have for my code. I'm still in the very early prototyping stage.

    Read the article

  • Advice for the development of a jumping game!

    - by Esteban Quintero
    I want to create platforms on the stage that a sprite jumps between, something like this: http://itunes.apple.com/es/app/doodle-jump-cuidado-extremadamente/id307727765?mt=8 I would like some guidance on how to do this. 1) What is the best way to simulate the jump (velocity or an EntityModifier)? 2) How can I randomly generate the platforms with no overlap? 3) Should I move the camera or the sprites on the stage?
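
    For question 2, one simple approach is to place each new platform a random vertical gap above the previous one, with the gap capped below the sprite's maximum jump height so every platform stays reachable and no two platforms overlap. A small engine-agnostic Java sketch (class name, sizes, and gap values are all invented for illustration):

      import java.util.ArrayList;
      import java.util.List;
      import java.util.Random;

      public class PlatformGenerator {
          // Hypothetical dimensions in pixels; tune the gaps to your sprite's jump height.
          static final float PLATFORM_W = 100f, PLATFORM_H = 20f;
          static final float MIN_GAP = 40f, MAX_GAP = 120f, STAGE_W = 480f;

          /** Returns {x, y} positions, each a random reachable gap above the last. */
          public static List<float[]> generate(int count, Random rng) {
              List<float[]> platforms = new ArrayList<>();
              float y = 0f;
              for (int i = 0; i < count; i++) {
                  y += PLATFORM_H + MIN_GAP + rng.nextFloat() * (MAX_GAP - MIN_GAP);
                  float x = rng.nextFloat() * (STAGE_W - PLATFORM_W);
                  platforms.add(new float[] { x, y });
              }
              return platforms;
          }
      }

    Creating one platform sprite per generated (x, y) pair then gives a layout where platforms never overlap vertically and are always within jumping range.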

    Read the article
