Search Results


  • How to configure sudoers with path wildcards?

    - by C. Lee
    I need sudo for a command for any path under a particular area. Example: sudo mycommand /opt/apps/myapp/... What is the sudoers syntax to allow this command to run in any path that falls under /opt/apps/myapp? This is sudo on Solaris 10. Thank you for your reply, but I don't need wildcards for the path to the commands, but wildcards for the arguments for the commands. For example, we want to do something like... sudo mycmd /opt/userarea/area1 sudo mycmd /opt/userarea/area1/area2 sudo mycmd /opt/userarea/area1/area2/area3 So far, using wildcards for the arguments in sudoers looks like this: /opt/userarea/* /opt/userarea/*/* And it seems like if we want to have N levels of directories, then we need N lines in sudoers! Is there a better way to include all N levels in one line in sudoers? Thanks.
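
    A rough sketch of such a rule (the user name and the command path /usr/local/bin/mycmd are stand-ins; sudoers requires the full path to the command):

        youruser ALL = (root) /usr/local/bin/mycmd /opt/userarea/*

    On many sudo versions, wildcards in command arguments are matched with fnmatch() without FNM_PATHNAME, so a single * in an argument can match across / separators, meaning the one line above may already cover /opt/userarea/area1/area2/area3. Verify the behavior of your specific Solaris 10 sudo build with sudo -l before relying on it, and note that broad argument wildcards like this can match more than you intend (including multiple arguments), which is a known security caveat.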

    Read the article

  • How do I add an Approver to SharePoint 2010?

    - by CompGeekess
    I am still new to SharePoint and am learning so much, but I have come across a few hiccups, and here is one. I want to add an approver to SharePoint 2010 who has FULL CONTROL. My manager requested that I find out where the approval requests are going and redirect them to him. (I have no idea where or how to find this out.) Is this possible to do in Central Administration, or must I go into each site/subsite and set him to be the approver that way? Googling showed me how to approve workflows or how to create approvals, and my other resources didn't give much help either. So far I have gone into a few individual sites and set my manager and me up as approvers with full control, but I am uncertain if this is the correct procedure or if there is a better way to do this. For example, have the lower levels inherit from the higher level - set security at the highest level and cascade it to the child levels. Thank You.

    Read the article

  • Determining percentage of students between certain grades

    - by dunc
    I have an Excel spreadsheet with the following data:

        Student | KS2 Grade | Target | Expected 1 | Expected 2 | Expected 3 | FSM Status | Gifted & Talented
        User 1  |     4     |   6    |     7      |     5      |     6      |     Y      |        N
        User 2  |     3     |   5    |     5      |     4      |     4      |     N      |        N
        User 3  |     5     |   6    |     6      |     6      |     7      |     N      |        N
        User 4  |     4     |   6    |     5      |     6      |     6      |     N      |        Y
        User 5  |     5     |   7    |     7      |     6      |     7      |     N      |        N
        User 6  |     3     |   4    |     4      |     4      |     4      |     N      |        N
        User 7  |     3     |   4    |     5      |     3      |     4      |     Y      |        Y

    What I'd like to do is determine the percentage of students with certain levels, i.e. a range of levels. For instance, in the data above, I'd like to determine the % of all students that have a Target level of 5 - 7. I'd then like to also expand the formula to specify the % of Gifted & Talented students with a Target level of 5 - 7. Is this possible in Excel? If so, where do I start?
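
    Assuming the Target column is C and the Gifted & Talented flag is in column H, with data in rows 2-8 (adjust the ranges to your sheet), something like this should work; COUNTIFS needs Excel 2007 or later:

        =COUNTIFS(C2:C8,">=5",C2:C8,"<=7")/COUNTA(C2:C8)
        =COUNTIFS(C2:C8,">=5",C2:C8,"<=7",H2:H8,"Y")/COUNTIF(H2:H8,"Y")

    The first formula gives the fraction of all students with a Target of 5 - 7; the second restricts both the numerator and the denominator to Gifted & Talented students. Format the cells as percentages to see the % view.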

    Read the article

  • Chennai Metro Water [Indian Websites]

    - by samsudeen
    If you are living in Chennai, you definitely have this question in your mind: “Will Chennai survive this summer without any water crisis?”. The Chennai Metro Water board maintains a website where you can find all the details about water storage and supply. This site has all the information about water supply that matters for Chennai residents. Here are some of the uses of this site. Water Sewer Application Status: If you are a new resident and have applied for a water sewer connection, you can check your application (Water Sewer Application Status) by entering your WA No (Water Application No). You can also download all the required application forms. Register Complaints: You can register complaints (Register Complaints) about water supply, a defective water meter, or sewage blocking online. Pay Water & Sewage Tax: You can pay your water & sewage tax (Pay Tax) online, including any dues/arrears left out. Lake Level: This site also maintains the current (Lake Level) and historic water levels of the Chennai water reservoirs, including the daily inflow/outflow of water. You can also look up the daily/monthly rainfall levels of these water reservoirs from 1965 onwards. Hope this information is useful for those who are planning or building a new house in the Chennai Corporation area.

    Read the article

  • SQL SERVER – Identify Most Resource Intensive Queries – SQL in Sixty Seconds #029 – Video

    - by pinaldave
    There are a few questions I often get asked. It is interesting how, in our daily lives, all of us often need the same kind of information at the same time. Here are examples of such questions:

    How many user created tables are there in the database?
    How many non clustered indexes does each of the tables in the database have?
    Is a table a heap, or does it have a clustered index on it?
    How many rows does each of the tables in the database contain?

    I finally wrote down a very quick script (in less than sixty seconds when I originally wrote it) which can answer the above questions. I also created a very quick video to explain the results and how to execute the script. Here is the complete script which I have used in the SQL in Sixty Seconds video:

        SELECT [schema_name] = s.name,
               table_name = o.name,
               MAX(i1.type_desc) ClusteredIndexorHeap,
               COUNT(i.TYPE) NoOfNonClusteredIndex,
               p.rows
        FROM sys.indexes i
        INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
        INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
        LEFT JOIN sys.partitions p ON p.OBJECT_ID = o.OBJECT_ID AND p.index_id IN (0,1)
        LEFT JOIN sys.indexes i1 ON i.OBJECT_ID = i1.OBJECT_ID AND i1.TYPE IN (0,1)
        WHERE o.TYPE IN ('U') AND i.TYPE = 2
        GROUP BY s.name, o.name, p.rows
        ORDER BY schema_name, table_name

    Related Tips in SQL in Sixty Seconds:
    Find Row Count in Table – Find Largest Table in Database
    Find Row Count in Table – Find Largest Table in Database – T-SQL
    Identify Numbers of Non Clustered Index on Tables for Entire Database
    Index Levels, Page Count, Record Count and DMV – sys.dm_db_index_physical_stats
    Index Levels and Delete Operations – Page Level Observation

    What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel

    Read the article

  • "Are You There?".. India Tops Logistics List of Emerging Nations

    - by [email protected]
    It's just amazing how far, wide and deep modern supply chains are extending. AMR reported on 15 Apr (M. Burkett, A. Reese) in an SCM webcast that "Penetrating Emerging Markets" was the top priority for organizations, based on a recent survey. I took this as both adding new consumers to their prospect list as well as leveraging "lower cost labor arbitrage" (read "3 Billion Capitalists"). Supply Chain Quarterly reports that India and Brazil received the highest rankings of the logistics markets in developing nations. India tops the list of emerging nations in an index that scores the attractiveness of logistics markets to foreign investors. Developed by the UK-based research firm Transport Intelligence, the new Emerging Market Logistics Index rated 38 developing countries on 3 factors:

    1. "Market size and growth attractiveness," which considered a country's economic output, projected growth rate, and population size.
    2. "Market compatibility," which examined how well-matched a nation was with the services offered by global logistics providers. This includes a country's security levels, market accessibility, foreign direct investment, distribution of wealth and population, and development of its service sector.
    3. "Connectedness," which rated the efficiency of customs and border controls, liner shipping connections, and transportation infrastructure.

    India claimed the top spot due to its market size and growth prospects. Brazil is second because of its economic performance, good levels of market accessibility, and improving domestic and international transport connections. Are you there? For more information see www.transportintelligence.com/articles_papers.

    The top 10 emerging countries:
    1. India
    2. Brazil
    3. Indonesia
    4. Mexico
    5. Russia
    6. Turkey
    7. United Arab Emirates
    8. Egypt
    9. Saudi Arabia
    10. Malaysia

    Source: Transport Intelligence, The Emerging Markets Logistics Index, March 2010

    Read the article

  • Instructor Insight: Using the Container Database in Oracle Database 12 c

    - by Breanne Cooley
    The first time I examined the Oracle Database 12c architecture, I wasn’t quite sure what I thought about the Container Database (CDB). In the current release of the Oracle RDBMS, the administrator now has a choice of whether or not to employ a CDB.

    Bundling Databases Inside One Container. In today’s IT industry, consolidation is a common challenge. With potentially hundreds of databases to manage and maintain, an administrator will require a great deal of time and resources to upgrade and patch software. Why not consider deploying a container database to streamline this activity? By “bundling” several databases together inside one container, in the form of pluggable databases, we can save on overhead process resources and CPU time. Furthermore, we can reduce the human effort required for periodically patching and maintaining the software.

    Minimizing Storage. Most IT professionals understand the concept of storage, as in solid state or non-rotating. Let’s take one-to-many databases and “plug” them into ONE designated container database. We can minimize many redundant pieces that would otherwise require separate storage and architecture, as was the case in previous releases of the Oracle RDBMS. The data dictionary can be housed and shared in one CDB, with individual metadata content for each pluggable database. We also won’t need as many background processes, thus reducing the overhead cost of the CPU resource.

    Improve Security Levels within Each Pluggable Database. We can now segregate the CDB-administrator role from that of the pluggable-database administrator as well, achieving improved security levels within each pluggable database and within the CDB. And if the administrator chooses to use the non-CDB architecture, everything is backwards compatible, too.

    The bottom line: it's a good idea to at least consider using a CDB.

    -Christopher Andrews, Senior Principal Instructor, Oracle University
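
    As a rough sketch of what “plugging” a database into a CDB looks like in SQL (the PDB name, admin user, and file paths below are invented for illustration; the exact clauses vary by configuration):

        -- Create a new pluggable database from the seed, then open it.
        CREATE PLUGGABLE DATABASE pdb1
          ADMIN USER pdb1admin IDENTIFIED BY secret
          FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/',
                               '/u01/oradata/cdb1/pdb1/');
        ALTER PLUGGABLE DATABASE pdb1 OPEN;

    The new PDB shares the CDB's data dictionary and background processes, which is exactly the storage and overhead saving described above.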

    Read the article

  • Case Study: Polystar Improves Telecom Networks Performance with Embedded MySQL

    - by Bertrand Matthelié
    Polystar delivers and supports systems that increase the quality, revenue and customer satisfaction of telecommunication services. Headquartered in Sweden, Polystar helps operators worldwide, including Telia, Tele2, Telekom Malaysia and T-Mobile, to monitor their network performance and improve service levels.

    Challenges:
    Deliver complete turnkey solutions to customers, integrating a database that ensures high performance at scale while being very easy to use, manage and optimize.
    Enable the implementation of distributed architectures including one database per server, while maintaining a low Total Cost of Ownership (TCO).
    Avoid growing database complexity as the volume of mobile data to monitor and analyze drastically increases.

    Solution:
    Evaluation of several databases and selection of MySQL, based on its high performance, manageability, and low TCO.
    The MySQL databases implemented within the Polystar solutions handle on average 3,000 to 5,000 transactions per second.
    Up to 50 million records are inserted every day in each database.
    Typical installations include between 50 and 100 MySQL databases, up to 300 for the largest ones.
    Data is then periodically aggregated, with the original records being overwritten, as detailed information becomes unnecessary to operators after a few weeks.

    The exponential growth in mobile data traffic, driven by the proliferation of smartphones and the usage of social media, requires ever more powerful solutions to monitor, analyze and turn network data into actionable business intelligence. With MySQL, Polystar can deliver powerful, yet easy to manage, solutions to its customers. MySQL-based Polystar solutions enable operators to monitor, manage and improve the service levels of their telecom networks in over a dozen countries from a single location. The new and innovative MySQL features constantly delivered by Oracle help ensure Polystar that it will be able to meet its customers' needs as they evolve.

    “MySQL has been a great embedded database choice for us. It delivers the high performance we need while remaining very easy to use, manage and tune. Power and simplicity at its best.” - Mats Söderlindh, COO at Polystar.

    Read the article

  • Making an AI walk on a NavigationMesh (2D/Top-Down game)

    - by Lennard Fonteijn
    For some time I have been working on a framework which should make it possible to generate 2D levels based on a set of rules specified by level designers. You can read more about it here, as I won't go into details: http://www.jorisdormans.nl/article.php?ref=engineering_emergence Anyway, I'm now at the point of putting the framework to use and am having trouble coming up with a solution for AI. I decided to implement a NavigationMesh in the generated levels, as I already have that information to start with. Consider the following image (borrowed from http://www.david-gouveia.com/pathfinding-on-a-2d-polygonal-map/): When I run A* on the NavigationMesh, the red path is suggested when I want to go from point A to B (either direction). However, I don't want my AI to walk that path directly, clipping corners; I'd rather have them follow the more logical black path. How would I go about getting from the red path to the black path? Are there any algorithms for this? Which steps do I take? Is A* the proper solution for this at all? For some additional information: the proof-of-concept game is a 2D top-down game written in C#, but examples/references in any language are welcome!
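
    A* over the navmesh polygons is the right first step; the corner-hugging red path is normally fixed with a smoothing post-process, most commonly the funnel algorithm (also known as string pulling). A simpler variant that is easy to bolt on is greedy line-of-sight smoothing; here is a sketch in Java (lineOfSight is a placeholder you would implement against your own navmesh, and path is the A* waypoint list, assumed non-empty):

        import java.util.ArrayList;
        import java.util.List;

        public final class PathSmoother {

            // Placeholder: returns true if the straight segment from (ax,ay)
            // to (bx,by) stays entirely inside walkable navmesh polygons.
            interface NavMesh {
                boolean lineOfSight(float ax, float ay, float bx, float by);
            }

            // Greedy smoothing: from each kept waypoint, jump ahead to the
            // farthest waypoint that is still directly reachable.
            static List<float[]> smooth(List<float[]> path, NavMesh mesh) {
                List<float[]> out = new ArrayList<>();
                out.add(path.get(0));
                int i = 0;
                while (i < path.size() - 1) {
                    int j = path.size() - 1;
                    while (j > i + 1 && !mesh.lineOfSight(path.get(i)[0], path.get(i)[1],
                                                          path.get(j)[0], path.get(j)[1])) {
                        j--; // fall back toward the next adjacent waypoint
                    }
                    out.add(path.get(j));
                    i = j;
                }
                return out;
            }
        }

    The funnel algorithm produces tighter paths (it hugs corners by exactly the agent radius if you inflate the mesh), but the raycast version above is often good enough for a top-down game and is much easier to debug.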

    Read the article

  • Optimizing hash lookup & memory performance in Go

    - by Moishe
    As an exercise, I'm implementing HashLife in Go. In brief, HashLife works by memoizing nodes in a quadtree so that once a given node's value in the future has been calculated, it can just be looked up instead of being re-calculated. So e.g. if you have a node at the 8x8 level, you remember it by its four children (each at the 4x4 level). So next time you see an 8x8 node, when you calculate the next generation, you first check if you've already seen a node with those same four children. This is extended up through all levels of the quadtree, which gives you some pretty amazing optimizations if e.g. you're 10 levels above the leaves. Unsurprisingly, it looks like the performance crux of this is the lookup of nodes by child-node values. Currently I have a hashmap of {&upper_left_node, &upper_right_node, &lower_left_node, &lower_right_node} -> node, so my lookup function is this:

        func FindNode(ul, ur, ll, lr *Node) *Node {
            var node *Node
            var ok bool
            nc := NodeChildren{ul, ur, ll, lr}
            node, ok = NodeMap[nc]
            if ok {
                return node
            }
            node = &Node{ul, ur, ll, lr, 0, ul.Level + 1, nil}
            NodeMap[nc] = node
            return node
        }

    What I'm trying to figure out is whether the "nc := NodeChildren..." line causes a memory allocation each time the function is called. If it does, can I/should I move the declaration to the global scope and just modify the values each time this function is called? Or is there a more efficient way to do this? Any advice/feedback would be welcome. (Even coding style nits; this is literally the first thing I've written in Go, so I'd love any feedback.)
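
    In my understanding (worth verifying, since this is compiler-dependent): a composite literal assigned to a local variable only heap-allocates if escape analysis decides it escapes, and using nc as a map key copies the struct into the map, so the local should stay on the stack. Hoisting it to a global would not save an allocation and would also make FindNode unsafe to call concurrently. Two ways to check: build with go build -gcflags=-m to print the escape-analysis decisions, or measure directly with a benchmark. A rough sketch of the latter, where n1..n4 stand for pre-built *Node fixtures you create in your test setup:

        // node_test.go - minimal sketch; assumes n1..n4 are valid *Node
        // fixtures initialized elsewhere in the test package.
        package main

        import "testing"

        func BenchmarkFindNode(b *testing.B) {
            b.ReportAllocs() // report allocs/op alongside ns/op
            for i := 0; i < b.N; i++ {
                FindNode(n1, n2, n3, n4)
            }
        }

    Run it with go test -bench=FindNode -benchmem; if allocs/op is 0 on the cache-hit path, the literal is not your bottleneck and the cost is in the map hashing itself.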

    Read the article

  • Alternatives to a leveling system

    - by Bane
    I'm currently designing a rough prototype of a mecha fighting game. These are the basics I came up with:

    Multiplayer (matchmaking for up to 10 people, for now)
    Browser based (HTML5)
    2D (<canvas>)
    Persistent (as in, players have accounts and don't have to use a new mech each time they start a match)
    Players earn money upon destroying another mech, which is used to buy parts (guns, armor, boosters, etc.)
    Simplicity (both of the game itself, and of the development of said game)
    No "leveling" (as in, players don't get awarded with XP)

    The last part is bothering me. At first, I wanted to have players gain experience points (XP) when destroying other mechs, but gaining two things at once (money and XP) seemed to be in conflict with my last point, which is simplicity. If I were to have a leveling system, that would require additional development. But the biggest problem is that I simply couldn't fit it anywhere! Adding levels would require adding meaning to these levels, and most of the things that I hoped to achieve could already be achieved with the money mechanic I introduced. So I decided to drop leveling completely. That, in turn, removed a fairly popular and robust means of progression from my game (not that I would use it well anyway). Is there another way of progression in games, aside from leveling and XP points, that wouldn't be rendered redundant by my money mechanic, would be somehow meaningful (even on a symbolic level), and wouldn't be in conflict with my last point, which is simplicity?

    Read the article

  • LibGdx efficient data saving/loading?

    - by grimrader22
    Currently, my LibGDX game consists of a 512 x 512 map of tiles and entities such as players and monsters. I am wondering how to efficiently save and load the data of my levels. At the moment I am using JSON serialization for each class I want to save. I implement the Json.Serializable interface for all of these classes and write only the variables that are necessary. So my map consists of 512 x 512 tiles; that's over 260,000 tiles. Each tile on the map consists of a Tile object, which points to some final Tile object like a GRASS_TILE or a STONE_TILE. When I serialize each level tile, the final Tile that it points to is re-serialized over and over again, so if I have 100 tiles all pointing to GRASS_TILE, the data of GRASS_TILE is written 100 times over. When I go to load/deserialize my objects, 100 GrassTile objects are created, but they are each their own object; they no longer point to the final Tile object. This makes reading/writing files very slow. If I were to abandon JSON serialization, to my knowledge my next best option would be saving the level data to a SQL database. Unless there is a way to speed up serializing/deserializing 260,000 tiles, I may have to do this. Is this a good idea? Could I really write that many tiles to the database efficiently? To sum all this up, I am trying to save my levels using JSON serialization, but it is VERY slow. What other options do I have for saving the data of so many tiles? I must also note that the JSON serialization is not slow on a PC; it is only VERY slow on a mobile device. Since file writing/reading is so slow on mobile devices, what can I do?
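
    Since the per-cell data is really just "which final Tile is here", one common approach is to save only a tile ID per cell in a compact binary form and resolve the IDs back to the shared GRASS_TILE/STONE_TILE instances on load (the flyweight pattern). A sketch under assumptions (one byte per tile, so fewer than 256 tile types; class and method names are invented):

        import java.io.*;

        public final class LevelIO {

            // Write one byte per cell; a 512 x 512 grid is only 256 KB.
            static void save(byte[][] ids, OutputStream os) throws IOException {
                DataOutputStream out = new DataOutputStream(new BufferedOutputStream(os));
                out.writeInt(ids.length);      // width
                out.writeInt(ids[0].length);   // height
                for (byte[] column : ids) out.write(column);
                out.flush();
            }

            static byte[][] load(InputStream is) throws IOException {
                DataInputStream in = new DataInputStream(new BufferedInputStream(is));
                int w = in.readInt(), h = in.readInt();
                byte[][] ids = new byte[w][h];
                for (int x = 0; x < w; x++) in.readFully(ids[x]);
                return ids;
            }
        }

    On load, map each ID through a lookup array such as Tile[] registry = { GRASS_TILE, STONE_TILE, ... }, so every cell points at the one shared instance again instead of creating 100 separate GrassTile objects. Gdx.files can supply the streams on both desktop and Android, and entities (players, monsters) can stay in JSON since there are few of them.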

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is that of Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:

    Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    Additional Storage Options for Snap Clone (includes support for the Database feature CloneDB)
    Improved Rapid Start Kits
    Extensible Metering and Chargeback
    Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog

    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:

    Service Catalogs: Defining Standardized Database Service
    High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]

    EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits:

    Present a collection of standardized database service definitions
    Define standardized pools of hardware and software for provisioning
    Role based access to cater to different classes of users
    Automated procedures to provision the predefined database definitions
    Setup chargeback plans based on service tiers and database configuration sizes, etc.

    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

    Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    The standby databases can be single instance, RAC, or RAC One Node databases
    Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    The standby databases can be in either mount or read only (requires Active Data Guard option) mode
    All database versions 10g to 12c are supported (as certified with EM 12c)
    All 3 protection modes can be used - maximum availability, performance, security
    Log apply can be set to sync or async along with the required apply lag

    The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combinations from the table below (EM 12cR4):

        Primary | Standby [1 or more]
        SI      | -
        SI      | SI
        RAC     | -
        RAC     | SI
        RAC     | RAC
        RON     | -
        RON     | RON

    where RON = RAC One Node, supported via custom post-scripts in the service template. A sample service catalog would look like the image below.

    [Screenshot: sample service catalog showing the defined service levels, data centers, and standardized sizes]
    Here we have defined 4 service levels, deployed across 2 data centers, with 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone

    In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued the dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all low level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. Thus we deliver the benefits of database thin cloning without requiring you to drastically change the infrastructure or IT's operating style.

    In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature (it was first introduced in the 11.2.0.2 patchset), it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend reading the following sources:

    Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
    Oracle OpenWorld Presentation by CERN: Efficient Database Cloning using Direct NFS and CloneDB

    The advantages of the new CloneDB integration with EM12c Snap Clone are:

    Space and time savings
    Ease of setup - no additional software is required other than the Oracle database binary
    Works on all platforms
    Reduced dependence on storage administrators
    Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
    Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
    Complete lifecycle of the clones managed by EM12c - performance, configuration, etc.

    3. Improved Rapid Start Kits

    DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS).
    One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups.

    The Rapid Start Kit in reality is a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. This kit works both for Oracle's engineered systems like Exadata, SuperCluster, etc. and on commodity hardware. One can draw a parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

    Steps to use the kit:

    The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    It can be run from this default location or from any server which has the emcli client installed
    For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py

    The database_cloud_setup.py script takes two inputs:

    Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools, along with host names, Oracle home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal.

    Once all the xml files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter in the Cloud Administration Guide.

    4. Extensible Metering and Chargeback

    Last but not least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:

    Extend chargeback to any target type managed in EM
    Promote any metric in EM as a chargeback entity
    Extend the list of charge items via metric or configuration extensions
    Model abstract entities like no. of backup requests, job executions, support requests, etc.

    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter in the Cloud Administration Guide.

    5. Miscellaneous Enhancements

    There are other miscellaneous, yet important, enhancements that are worth a mention.
    These have mostly been asked for by customers like you. They are:

    Custom naming of DB Services - Self service users can provide custom names for DB SID, DB service, schemas, and tablespaces. Every custom name is validated for uniqueness in EM.
    'Create like' of Service Templates - Now creating variants of a service template is only a click away. This would be vital when you publish service templates to represent different database sizes or service levels.
    Profile viewer - View the details of a profile, like datafiles, control files, snapshot ids, export/import files, etc., prior to its selection in the service template.
    Cleanup automation for failed and successful requests - A single emcli command cleans up all remnant artifacts of a failed request. Cleanup can be performed on a per request basis or for the entire pool. As an extension, you can also delete successful requests.
    Improved delete user workflow - Allows administrators to reassign cloud resources to another user or delete all of them.
    Support for multiple tablespaces for schema as a service - In addition to multiple schemas, users can also specify multiple tablespaces per request.

    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!

    References:
    Cloud Management Page on OTN
    Cloud Administration Guide [Documentation]

    -- Adeesh Fulay (@adeeshf)

    Read the article

  • Learn To Adjust Contrast Like a Pro in Photoshop, GIMP, and Paint.NET

    - by Eric Z Goodnight
    Brightness and Contrast tools are for beginners! Ever wondered what graphics programs offer advanced users to ensure their photographs have a great value range? Read on to learn about Levels, Curves, and Histograms in three major programs. Curves and Levels are not as intuitive as the more basic Brightness and Contrast sliders Photoshop, GIMP, and Paint.NET all share. However, they offer a great deal more control over images that professionals and skilled image editors will demand. Combine these tools with a knowledge of how basic histograms work, and you’ll be well on your way to editing contrast like a pro!

    Read the article

  • Game Review: Monument Valley

    Once again, it was a tweet that caught my attention... and the official description on the Play Store sounds good, too: "In Monument Valley you will manipulate impossible architecture and guide a silent princess through a stunningly beautiful world. Monument Valley is a surreal exploration through fantastical architecture and impossible geometry. Guide the silent princess Ida through mysterious monuments, uncovering hidden paths, unfolding optical illusions and outsmarting the enigmatic Crow People." So, let's check it out.

    What an interesting puzzle game. Once again, I left a review on the Play Store: "Beautiful but short distraction. Woohoo, what a great story behind the game. Using optical illusions and impossible geometries in this fantastic adventure of the silent princess just puts all the pieces perfectly together. Walking the amazing paths in the various levels and solving the riddles gives some decent hours of distraction, but in the end you might have the urge to do more..."

    I can't remember exactly when or who tweeted about the game, but honestly it caught my attention based on the simplicity of the design and its isometric look. The game relies heavily on optical illusions in order to guide the silent princess Ida through her illusory adventure of impossible architecture and forgiveness. The game is set up like clockwork, and you are turning, flipping and switching elements on the paths between the doors. Unfortunately, there aren't many levels, and the game play lasted only some hours. Maybe there will be more astonishing looking realms and interesting gimmicks in future versions.

    Play Store: Monument Valley

    Also, check out the latest game updates on the official web site of ustwo. BTW, the game is also available on the Apple App Store and on the Amazon Store for the Kindle Fire.

    Read the article

  • Payroll Customers Must Apply Mandatory Patches to Maintain Your Supportability

    - by DanaD
    The HRMS Suite of products has minimum required Rollup Patch (RUP) levels as well as additional mandatory patches that our customers must apply to ensure they are in compliance for support. Without these patches, customers risk not being able to apply any fixes for issues they encounter, as these RUPs and mandatory patches are the minimum patch level expected by Oracle Support and Oracle Development. Core Payroll and International Payroll customers must apply the yearly Rollup Patch within 12 months of its issue. Legislative Payroll customers have additional requirements for the Rollup Patch, as the RUP generally is a prerequisite for the next Year End/Fourth Quarter/Year Begin payroll processing supported by Oracle. These minimum RUP patches and other mandatory patches for your product or legislation are created with the following goals in mind:

    Compliance: Manage the people in your organization within the requirements of a specific country.
    Supportability: Ensure you are on a common code base so that if problems are identified, patches can be readily provided to you.
    Reliability: Reliable code with multiple customer downloads and comprehensive testing.

    For the listing of Mandatory Rollup Patches for Oracle Payroll, please view Doc ID 295406.1: Mandatory Family Pack/Rollup Patch (RUP) Levels for Oracle Payroll. For the listing of Mandatory Patches for the HRMS Suite, please view Doc ID 1160507.1: Oracle E-Business Suite HCM Information Center – Consolidated HRMS Mandatory Patch List. For information on the latest Rollup Patches (RUPs) for the HRMS Suite, please view Doc ID 135266.1: Oracle HRMS Product Family – Release 11i & 12 Information.

    Read the article

  • How to monetize and protect a engine's and its framework's copyrights and patents?

    - by Arthur Wulf White
    I created a game engine that handles:

    Rendering levels with 2D textured curved surfaces
    Collisions with curved surfaces
    Animation paths and navigation in 2D space

    I have also made a framework for:

    Procedural organic level generation with round surfaces
    Level editing
    Lightweight sprite design

    The engine and framework are written in AS3, and I am in the process of translating the code into HaXe to better support other platforms. I am also interested in adding animated curved platforms and more advanced level editing features. Currently, I have a part time job, and any time I spend on this engine is taken out of either my limited free time (I'm a student working to support myself through school) or my time working at my job. I really believe this engine can make life much easier for people designing Tower Defence games, Shooters and Platformers, while also possibly improving their results. It could also support RTS, RPGs and racing games very well. It contains original algorithms that could be used for procedural generation of organic round and smooth levels. The algorithms I used are new and are not available in any other level editor I've seen. In order to constantly improve the engine and have it tested thoroughly, I think the best route is releasing it to the public. What are the best ways to benefit myself and others with my new framework? I want to have some license allowing me to share the framework and still benefit from it. Any advice would be appreciated. This issue has been on my mind a lot this year. I am hoping to find a solution that will bring me some relief. I am thinking of designing three sample games, releasing them and starting a Kickstarter; any advice and thoughts on the matter would be valuable. My goal is, like Markus von Broady suggested, to get people involved in developing the engine and let people use it for games for either a symbolic fee or for free, and charge for support. That, or use some form of crowdsourcing. Do I need to hire a lawyer to get some sort of legal document to protect my work?

    Read the article

  • Storing Tiled Level Data in J2ME game

    - by Alex
    I'm developing a J2ME game which uses tiled backgrounds for the levels. My question is how to store this tile information in my game. At the moment it is stored as an array, with each number representing a different tile from the tile-sheet. This works well enough; however, I don't like the fact that it is 'hard-coded' into the game, because (at least in my opinion) it makes it harder to edit the levels or design new ones. I was also thinking that it would be difficult if you wanted to add a 'level pack'. I'm not sure how that would be achieved, though; it's not something I was planning on doing, I'm just curious. I was wondering if there is a way I could store level data in some external file and then load it into the game. The problem is I don't know what the limitations are for J2ME regarding file I/O; can it read in any file like Java? I am aware of the RMS, but from my experience I don't think this would work (unless I am mistaken). Also, would loading the data in this way be too big a performance hit? Or is there another way I can achieve what I am trying to do? As I said, the way I have it at the moment works fine, and if this is the only viable option then it will suffice.
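
    J2ME (CLDC/MIDP) cannot read arbitrary files from the device's filesystem the way desktop Java can (that is what RMS and, on devices that support it, the optional JSR-75 FileConnection API are for), but any file you package inside the JAR can be read as a resource stream, which fits level data well. A minimal sketch, assuming a hypothetical /level1.dat resource laid out as width, height, then one byte per tile:

        // Inside your MIDlet or level-loading class (CLDC 1.1 compatible).
        private byte[] loadLevel(String name, int[] dims) throws java.io.IOException {
            java.io.InputStream is = getClass().getResourceAsStream(name); // e.g. "/level1.dat"
            java.io.DataInputStream in = new java.io.DataInputStream(is);
            dims[0] = in.readUnsignedByte();   // width
            dims[1] = in.readUnsignedByte();   // height
            byte[] tiles = new byte[dims[0] * dims[1]];
            in.readFully(tiles);               // one byte = one tile-sheet index
            in.close();
            return tiles;
        }

    Reading a small binary resource like this is fast even on constrained handsets, and your level editor can emit the same format. A level pack would still mean shipping a new JAR (or downloading data over HTTP into RMS), since the JAR itself is read-only at runtime.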

    Read the article

  • Fastest way to parse big json android

    - by jem88
    I've a doubt that doesn't let me sleep! :D I'm currently working with big JSON files, with many levels. I parse these objects using the 'default' Android way, so I read the response with a ByteArrayOutputStream, get a string and create a JSONObject from the string. All fine here. Now, I have to parse the content of the JSON to get the objects of my interest, and I really can't find a better way than parsing it manually, like this:

        String status = jsonObject.getString("status");
        Boolean isLogged = jsonObject.getBoolean("is_logged");
        ArrayList<Genre> genresList = new ArrayList<Genre>();
        // Get jsonObject with genres
        JSONObject jObjGenres = jsonObject.getJSONObject("genres");
        // Instantiate an iterator on jsonObject keys
        Iterator<?> keys = jObjGenres.keys();
        // Iterate on keys
        while (keys.hasNext()) {
            String key = (String) keys.next();
            JSONObject jObjGenre = jObjGenres.getJSONObject(key);
            // Create genre object
            Genre genre = new Genre(
                jObjGenre.getInt("id_genre"),
                jObjGenre.getString("string"),
                jObjGenre.getString("icon")
            );
            genresList.add(genre);
        }
        // Get languages list
        JSONObject jObjLanguages = jsonObject.getJSONObject("languages");
        Iterator jLangKey = jObjLanguages.keys();
        List<Language> langList = new ArrayList<Language>();
        while (jLangKey.hasNext()) {
            // Iterate on jLangKey obj
            String key = (String) jLangKey.next();
            JSONObject jCurrentLang = (JSONObject) jObjLanguages.get(key);
            Language lang = new Language(
                jCurrentLang.getString("id_lang"),
                jCurrentLang.getString("name"),
                jCurrentLang.getString("code"),
                jCurrentLang.getString("active").equals("1")
            );
            langList.add(lang);
        }

    I think this is really ugly, frustrating, a time-waster, and fragile. I've seen parsers like json-smart and Gson... but it seems difficult to parse a JSON with many levels and get the objects! But I guess there must be a better way... Any idea? Every suggestion will be really appreciated. Thanks in advance!
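
    For comparison, here is a hedged sketch of the same parsing with Gson. Objects keyed by dynamic ids, like "genres" and "languages" above, map naturally onto Map fields, so no manual key iteration is needed (the class and field names below are inferred from the snippet and would need adjusting to the real payload):

        import com.google.gson.Gson;
        import com.google.gson.annotations.SerializedName;
        import java.util.Map;

        // Field names mirror the JSON keys used in the manual version above.
        class ApiResponse {
            String status;
            @SerializedName("is_logged") boolean isLogged;
            Map<String, Genre> genres;        // JSON object with dynamic keys
            Map<String, Language> languages;
        }

        class Genre {
            @SerializedName("id_genre") int idGenre;
            String string;
            String icon;
        }

        class Language {
            @SerializedName("id_lang") String idLang;
            String name;
            String code;
            String active;                    // "1" when active
        }

        class JsonParsing {
            // One call replaces both while-loops; Gson fills the maps by reflection.
            static ApiResponse parse(String jsonString) {
                return new Gson().fromJson(jsonString, ApiResponse.class);
            }
        }

    For very large responses, Gson can also deserialize from a streaming Reader instead of building the whole string first, which avoids holding two copies of the payload in memory.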

    Read the article

  • Employee Engagement: Drive Business Value

    - by Kellsey Ruppel
    As we’ve been discussing this week, employee engagement is extremely important, and you’ve probably realized that effectively engaging your employees is essential to driving business value. Your employees are the ones responsible for executing on the business’ objectives. Your employees (in the sales & service departments) are the ones interacting with your customers the most, so delivering on customer expectations and attaining high levels of customer engagement are simply not possible without successfully empowering this stakeholder group. High employee and partner engagement can have many benefits, including:

    Higher levels of employee productivity
    Longer employee retention
    Stronger, more enduring and more successful relationships
    Serving as ambassadors for an organization’s brand
    More likely to deliver excellent customer service
    Referring others for hire
    Recommending the organization’s products and services
    Sharing feedback with their colleagues

    In a way, engagement is a measure of employee investment in an organization’s mission and brand. And then you have the enablement piece of this as well. It’s hard to imagine a high level of engagement existing among employees who don’t feel that they’ve been enabled to do their jobs very efficiently or effectively. You’re just not going to find high engagement among people if the everyday processes and technologies they work with make it a challenge for them to access, share and manage the information they need to do their jobs, or if they’re unable to effectively collaborate on the projects they’re working on. How does your organization measure on the employee engagement spectrum? We’ve got a number of different resources to help you get started!

    Portal Resource Center
    Video: Got a minute? WebCenter in Action Webcast Series
    Portal Engagement Webcast

    Read the article

  • Any empirical evidence on the efficacy of CMMI?

    - by mehaase
    I am wondering if there are any studies that examine the efficacy of software projects in CMMI-oriented organizations. For example, are CMMI organizations more likely to finish projects on time and/or on budget than non-CMMI organizations? Edit for clarification: CMMI stands for "Capability Maturity Model Integration". It's developed by the Software Engineering Institute at Carnegie-Mellon University (SEI-CMU). It's not a certification, but there are various companies that will "appraise" your organization to various levels of CMMI, such as level 2 and level 3. (I believe CMMI level 1 is an animalistic, Hobbesian free-for-all that nobody aspires to. In other words, everybody is at least CMMI level 1, even if you've never heard of CMMI before.) I'm definitely not an expert, but I believe that an organization can be appraised for CMMI levels within different scopes of work: i.e. service delivery, software development, foobaring, etc. My question is focused on the software development appraisal: is an organization that has been appraised to CMMI Level X for software projects more likely to finish a software project on time and on budget than another organization that has not been appraised to CMMI Level X? However, in the absence of hard data about software-oriented CMMI, I'd be interested in the effect that CMMI appraisals have on other activities as well. I originally asked the question because I've seen various studies conducted on software (e.g. the essays in The Mythical Man Month refer to numerous empirical studies, as does McConnell's Code Complete), so I know that there are organizations performing empirical studies of software development.

    Read the article

  • Support our movement?

    - by Mirchi Sid
    | Imagine Freedom 2013 | mnearth Student Programs

    This is to inform you about the world's first online student competition, called Imagine Freedom, which is conducted by mnearth corporation India. Imagine Freedom will be one of the most popular student IT competitions in the world. The program will be run entirely with free and open source software and operating systems like Ubuntu Linux and other Linux distributions. The competition has a lot of categories, like web designing, software development and much more. The program coordinators will contact your schools for the selection process and the first steps of registration. Are you an expert in open source free software? Then it is the time for you! Otherwise, do you know anyone who has the skills? Then inform them about the program. The competitions will be held as part of the mnearth Student Programs. The program schedule and the local competition information will be sent after the applications are received. The competitions are categorized as follows.

    | Categories Of Participants
    Animation Films
    Multimedia Presentation
    3D Animation
    Web Designing
    Software Development
    Innovations
    Cloud Apps
    Games
    etc.

    | Levels Of Participants
    High School Level
    Higher Secondary Level
    College Level
    University Level

    | High School Level
    This level is for school students. Its age limit is 12 - 24 years. The competitions will start this year, to select the students who have the talent.

    For more information:
    Send to: [email protected]
    Call us on: 04936312206 (India)
    Join the community on Facebook: www.facebook.com/imaginefreedomonce
    Follow us on Twitter: www.twitter.com/imaginatingkids

    Read the article

  • Fast, accurate 2d collision

    - by Neophyte
    I'm working on a 2D top-down shooter and now need to go beyond my basic rectangle bounding box collision system. I have large levels with many different sprites, all of which are different shapes and sizes. The textures for the sprites are all square PNG files with transparent backgrounds, so I also need a way to register a collision only when the player walks into the coloured part of the texture, and not the transparent background. I plan to handle collision as follows:

    Check if any sprites are in range of the player
    Do a rect bounding box collision test
    Do an accurate collision (where I need help)

    I don't mind advanced techniques, as I want to get this right with all my requirements in mind, but I'm not sure how to approach this, or which techniques or even libraries to try. I know that I will probably need to create and store some kind of shape that accurately represents each sprite minus the transparent background. I've read that per pixel is slow, so given my large levels and number of objects I don't think that would be suitable. I've also looked at Box2D, but haven't been able to find much documentation, or any examples of how to get it up and running with SFML.
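
    One common middle ground is a precomputed alpha bitmask: build a boolean mask per texture once at load time (how you read the pixels is engine-specific, so that part is left out here), keep the cheap rect test as a broad phase, and only when it passes, scan just the overlapping region of the two masks. A sketch in Java under those assumptions:

        public final class PixelMask {
            final boolean[] solid; // true where alpha > threshold, row-major
            final int w, h;

            PixelMask(int w, int h, boolean[] solid) {
                this.w = w; this.h = h; this.solid = solid;
            }

            // Test two masks placed at (ax,ay) and (bx,by) in world space.
            // Call only after the bounding-box test has already passed.
            static boolean overlap(PixelMask a, int ax, int ay,
                                   PixelMask b, int bx, int by) {
                int x0 = Math.max(ax, bx), x1 = Math.min(ax + a.w, bx + b.w);
                int y0 = Math.max(ay, by), y1 = Math.min(ay + a.h, by + b.h);
                for (int y = y0; y < y1; y++) {
                    for (int x = x0; x < x1; x++) {
                        if (a.solid[(y - ay) * a.w + (x - ax)]
                                && b.solid[(y - by) * b.w + (x - bx)]) return true;
                    }
                }
                return false;
            }
        }

    This is still per-pixel in the worst case, but nothing is decoded at runtime, the scanned area is just the intersection rectangle, and the broad phase filters out most pairs; it is usually fast enough for a top-down shooter. The alternative is to trace each sprite's opaque region into a polygon offline (possibly with a tool) and hand those polygons to a library like Box2D, which then also gives you physics.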

    Read the article

  • CLR Profiler Allocated Bytes and XNA ContentManager

    - by Vackup
    I've been fighting with the XNA ContentManager and memory allocations for some weeks, because I'm trying to port my game from XNA (Windows) to ExEn/MonoTouch (iPhone). The problem is that after playing a few levels, my game exits unexpectedly on a real iPhone device (not the simulator). Profiling memory usage on Windows with CLRProfiler, I found some useful stuff, but I also found something I don't understand. If I use 2 ContentManagers (1 for shared assets and 1 for level assets), when profiling, "Allocated Bytes" grows and grows after each level, but memory consumption measured by the Windows Task Manager stays constant (it goes down when I unload the ContentManager and up again when I load content). Obviously, I call contentManager.Unload() when a level ends. After a few levels, my game exits unexpectedly on the iPhone device. If I use 1 ContentManager, "CLRProfiler Allocated Bytes" stays constant on Windows and on the iPhone; I can play the game normally and it doesn't exit unexpectedly. I use the same assets level after level. It seems like on iOS (iPhone), when loading and unloading the same assets, memory keeps being allocated until all device memory is consumed, so iOS kills the app. Can anybody explain to me how this really works? I've read quite a bit, but I still don't understand what's going on.

    Read the article

  • Change the Log Level of Node Manager.

    - by adejuanc
    This is useful to troubleshoot issues related to Node Manager, such as problems starting a Managed Server or reasons a server could be (re)started. To change the Log Level of Node Manager, you need to edit the nodemanager.properties file. This is usually located at: <MIDDLEWARE_HOME>/wlserver_10.3/common/nodemanager What you need to modify is property: ...LogLevel=INFO... Information about the appropriate values for this property is available in the Node Manager Documentation at 10.3 WebLogic Documentation (and in further releases) which states: LogLevel: Severity level of logging used for the Node Manager log. Node Manager uses the same logging levels as WebLogic Server. Default value: INFO However, this is incorrect. WLS has its own implementation of LogLevel, but Node Manager uses the standard Log Level from the java.util.logging.Level class. Therefore, the possible values for Node Manager LogLevel, in descending order are: SEVERE (highest value) WARNING INFO CONFIG FINE FINER FINEST (lowest value) The highest value provides only messages at the severe level. The warning level provides warning messages and severe messages, and so on. Besides those levels, ALL and OFF are also accepted. For example, if you only want Severe messages to be logged, select SEVERE. If you need the most detailed tracing available, select FINEST. For more information on what it will log at each level, please read the Java SE API for LoggingLevel.

    Read the article
