Search Results

Search found 7073 results on 283 pages for 'shared printers'.

Page 179/283

  • How to remove the Analyze option from a report in OBI 11.1.1.7.0?

    - by Varun
    Que) How to remove the Analyze option from a report in OBI 11.1.1.7.0?
    Ans) You can change the properties of a dashboard and its pages. Specifically, you can:
    - Change the style and description of the dashboard.
    - Add hidden named prompts to the dashboard and to its pages.
    - Specify which links (Analyze, Edit, Refresh, Print, Export, Add to Briefing Book, and Copy) are included with analyses at the dashboard level. Note that you can also set these links at the dashboard page level and the analysis level, which overrides the links set at the dashboard level.
    - Rename, hide, reorder, set permissions for, and delete pages.
    - Specify which accounts can save shared customizations, which accounts can assign default customizations for pages, and set account permissions.
    - Specify whether the Add to Briefing Book option is included in the Page Options menu for pages.
    To change the properties of a dashboard and its pages:
    1. Edit the dashboard.
    2. Click the Tools toolbar button and select Dashboard Properties. The "Dashboard Properties" dialog is displayed.
    3. Make the property changes that you want and click OK.
    4. Click the Save toolbar button.

    Read the article

  • Intel graphic chipset and NVIDIA Geforce GTX560

    - by antoine
    I have an NVIDIA GeForce GTX 560 with two video projectors, and I would like to use the onboard Intel graphics chipset to plug in an additional monitor. I saw the question How can I use both Intel onboard and Nvidia graphics at the same time? but the answer is so short that I was not convinced. My motherboard (GIGABYTE GA-H61M-D2P-B3 (rev. 1.0)), equipped with the Intel H61 chipset, allows shared memory between the onboard and PCIe cards, and Windows 7 lets me use all three outputs thanks to Intel's driver. I'm able to use the onboard graphics card, but without a graphical interface for now; I think I need the Intel driver for that. But I would like to know if I can set up my displays in xorg.conf with something like:
    Section "Device"
        Identifier "Device0"
        Driver "intel"
    EndSection
    Section "Device"
        Identifier "Device1"
        Driver "nvidia"
    EndSection
    Section "Device"
        Identifier "Device2"
        Driver "nvidia"
    EndSection
    Has anyone successfully set up something like that? Or should I burn my head experimenting with it by myself? Or is there any good reason to discourage me from trying? Thanks for your help. Antoine
    PS: I'm using Ubuntu 10.10 for now, but I could switch to another version.
    PS2: I also read Use 3 monitors w/built-in intel adapter + two old nvidia PCI cards on 10.10?, which doesn't tell me more about the possibility of using the Intel graphics and Nvidia at the same time.
    EDIT: According to Can not get Dual Monitors to work on Different GPUs, I should be able to run two X servers, one on Intel and the other on Nvidia. I will try and post the result here.
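    For reference, a minimal two-driver xorg.conf sketch along the lines asked about. The identifiers and BusID values are hypothetical and must match the machine's lspci output, and whether the proprietary nvidia driver will actually coexist with the intel driver inside a single X server is exactly the open question here:
        Section "ServerLayout"
            Identifier "Layout0"
            Screen     0 "IntelScreen"
            Screen     1 "NvidiaScreen" RightOf "IntelScreen"
        EndSection

        Section "Device"
            Identifier "IntelDevice"
            Driver     "intel"
            BusID      "PCI:0:2:0"    # onboard GPU; verify with lspci
        EndSection

        Section "Device"
            Identifier "NvidiaDevice"
            Driver     "nvidia"
            BusID      "PCI:1:0:0"    # GTX 560; verify with lspci
        EndSection

        Section "Screen"
            Identifier "IntelScreen"
            Device     "IntelDevice"
        EndSection

        Section "Screen"
            Identifier "NvidiaScreen"
            Device     "NvidiaDevice"
        EndSection
    If a single server refuses this combination, the two-X-server approach mentioned in the EDIT uses the same Device/Screen sections, just split across two separate ServerLayout configurations.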

    Read the article

  • Top tweets SOA Partner Community – November 2012

    - by JuergenKress
    Dear SOA partner community member Too many different products from Oracle and no idea how they fit together? Get a copy of the Oracle catalog, an excellent overview of the Oracle middleware portfolio. BPM is a key solution in this portfolio. To position BPM with your customers you can find many use-case ideas in the paper BPM 11g Patterns and industry specific value propositions for Financial Services & Insurance & Retail. Many more Process Accelerators (11.1.1.6.2) have become available; they are an excellent demo and starting point for BPM projects. Our SOA Suite team published the most important OOW presentation at the OTN website. The Oracle SOA proactive support team is running an introductory series of blog posts about SOA and JMS. To become an expert in SOA, Bob highlighted the latest list of SOA books. For OSB projects we recommend the EAIESB OSB poster. Thanks to all the experts who contributed and shared their SOA & BPM knowledge this month again. Please feel free to send us the link to your blog post via twitter @soacommunity:
    - Undeploy multiple SOA composites with WLST or ANT by Danilo Schmiedel
    - Fault Handling Slides and Q&A by Vennester
    - Installing Oracle Event Processing 11g by Antoney Reynolds
    - Expanding the Oracle Enterprise Repository with functional documentation by Marc Kuijpers
    - Build Mobile App for E-Business Suite Using SOA Suite and ADF Mobile by Michelle Kimihira
    - A brief note for customers running SOA Suite on AIX platforms by Christian
    - ACM - Adaptive Case Management by Peter Paul
    - BPM 11g - Dynamic Task Assignment with Multi-level Organization Units by Mark Foster
    - Oracle Real User Experience Insight: Oracle's Approach to User Experience
    Hope to see you at the Middleware Day at UK Oracle User Group Conference 2012 in Birmingham. Jürgen Kress, Oracle SOA & BPM Partner Adoption EMEA
    To read the newsletter please visit http://tinyurl.com/soanewsNovember2012 (OPN account required)
    To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required)
    If you need support with your account please contact the Oracle Partner Business Center.
    Blog Twitter LinkedIn Mix Forum
    Technorati Tags: SOA Community newsletter,SOA Community,Oracle SOA,Oracle BPM,BPM Community,OPN,Jürgen Kress

    Read the article

  • Entity Framework and distributed Systems

    - by Dirk Beckmann
    I need some help, or maybe only a hint in the right direction. I've got a system that is separated into two applications: an existing VB.NET desktop client using Entity Framework 5 with the code-first approach, and an ASP.NET Web API client in C# that is about to be refactored. It should be possible to deliver OData. The system and the data model are still evolving, so migrations will happen at undefined intervals. I'm now struggling with how to manage my database access on the Web API system. My favored approach would be to use Entity Framework on both systems, but I'm running into trouble when creating new migrations. Two solutions I've thought about:
    Shared data access dll: The first idea was to separate the data access layer into its own project and reference it from each of the systems. The context would be the same as long as the dll is up to date in each system. This way both solutions would be able to make a migration. The main problem is that updating the Web API system is much more complicated than updating the client with its ClickOnce update solution, and not every migration is important for the Web API. This would cause more update trouble and out-of-sync libraries.
    Database first on the Web API: The second idea was to use the database-first approach on the Web API side. But it seems that all annotations are lost with each model update.
    Other solutions with stored procedures have been discarded because of missing OData support and maintainability. Has anyone run into the same conflicts, or does anyone have advice on how such a problem can be solved?
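    A minimal sketch of the shared-dll option, assuming EF 5 code first (all names here are hypothetical): the entities and context live in one class library that both the VB.NET client and the Web API project reference, so migrations are defined exactly once.
        // Shared.DataAccess class library, referenced by both applications
        using System.Data.Entity;

        public class Order
        {
            public int Id { get; set; }
            public string Customer { get; set; }
        }

        public class ShopContext : DbContext
        {
            // "ShopDb" is an assumed connection string name configured in each host application
            public ShopContext() : base("name=ShopDb") { }

            public DbSet<Order> Orders { get; set; }
        }
    Each host then only supplies its own connection string, and Add-Migration is run against the shared project rather than against either application.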

    Read the article

  • ArchBeat Top 20 for March 25-31, 2012

    - by Bob Rhubart
    The top 20 most-clicked links as shared via my social networks for the week of March 25-31, 2012.
    - Oracle Cloud Conference: dates and locations worldwide
    - The One Skill All Leaders Should Work On | Scott Edinger
    - BPM in Retail Industry | Sanjeev Sharma
    - Oracle VM: What if you have just 1 HDD system | @yvelikanov
    - Solution for installing the ADF 11.1.1.6.0 Runtimes onto a standalone WLS 10.3.6 | @chriscmuir
    - Beware the 'Facebook Effect' when service-orienting information technology | @JoeMcKendrick
    - Using Oracle VM with Amazon EC2 | @pythianfielding
    - Oracle BPM: Adding an attachment during the Human Task Initialization | Manh-Kiet Yap
    - When Your Influence Is Ineffective | Chris Musselwhite and Tammie Plouffe
    - Oracle Enterprise Pack for Eclipse 12.1.1 update on OTN
    - A surefire recipe for cloud failure | @DavidLinthicum
    - IT workers bore brunt of offshoring over past decade: analysis | @JoeMcKendrick
    - Private cloud-public cloud schism is a meaningless distraction | @DavidLinthicum
    - Oracle Systems and Solutions at OpenWorld Tokyo 2012
    - Dissing Architects, or "What's wrong with the coffee?" | Bob Rhubart
    - Validating an Oracle IDM Environment (including a Fusion Apps build out) | @FusionSecExpert
    - Cookbook: SES and UCM setup | George Maggessy
    - Red Samurai Tool Announcement - MDS Cleaner V2.0 | @AndrejusB
    - OSB/OSR/OER in One Domain - QName violates loader constraints | John Graves
    - Spring to Java EE Migration, Part 3 | @ensode
    Thought for the Day: "Inspire action amongst your comrades by being a model to avoid." - Leon Bambrick

    Read the article

  • How is constant buffer allocation handled in DX11?

    - by Marek
    I'm starting with DX11 and I'm not sure if I'm doing things right. I want to have both the pixel and vertex shader programs in one file. Both use some shared and some different constant buffers, so it looks like this:
    Shader.fx
    cbuffer ForVS : register(b0)
    {
        float4x4 wvp;
    };
    cbuffer ForVSandPS : register(b1)
    {
        float4 stuff;
        float4 stuff2;
    };
    cbuffer ForVS2 : register(b2)
    {
        float4 stuff;
        float4 stuff2;
    };
    cbuffer ForPS : register(b3)
    {
        float4 stuff;
        float4 stuff2;
    };
    ....
    And in code I use:
    mContext->VSSetConstantBuffers( 0, 1, bufferVS);
    mContext->VSSetConstantBuffers( 1, 1, bufferVS_PS);
    mContext->VSSetConstantBuffers( 2, 1, bufferVS2);
    mContext->PSSetConstantBuffers( 1, 1, bufferVS_PS);
    mContext->PSSetConstantBuffers( 3, 1, bufferPS);
    The numbering of buffers in the PS is what bugs me: is it alright to bind random slots to shaders (in this example 1 and 3)? Does that mean it still uses just two buffers, or does it initialize the 0 and 2 buffer pointers to empty? Thank you.

    Read the article

  • mod_rewrite works within directory not on root

    - by Anvesh Saxena
    I am having a problem with my RewriteRule for the tags portion. What I have been able to debug is that the rule is at least being triggered, because the page tags.php is rendered, but without the URL parameters. This .htaccess file is in the root of my sub-domain and has the following content for the tags portion:
    # Rewrite rule for tags
    RewriteRule ^tags/(\w+)/(\d+)/?$ tags.php?tag_name=$1&tag_id=$2
    RewriteRule ^tags/(\w+)/?$ tags.php?tag_name=$1
    RewriteRule ^tags/?$ tags.php?tag_name=
    Another thing I haven't been able to debug is that a similar .htaccess file exists for a directory within my sub-domain and works as expected, with the necessary URL parameters available. The .htaccess file within the directory reads as follows:
    # Rewrite rule for tags
    RewriteRule ^tags/(\w+)/(\d+)/?$ restAPI.php?type=tags&tag_name=$1&tag_id=$2
    RewriteRule ^tags/(\w+)/?$ restAPI.php?type=tags&tag_name=$1
    RewriteRule ^tags/?$ restAPI.php?type=tags&tag_name=
    Could anyone point out the problem in my rewrite rules? I am also sometimes facing an Internal Server Error, which I am second-guessing is related to the same problem.
    Note: I have Apache version 2.2.23 on my shared hosting.
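    One frequent cause of "the page renders but the query parameters are empty" at the document root is MultiViews mapping /tags to tags.php through content negotiation before mod_rewrite ever appends the parameters. A sketch of the root rules with that ruled out; the Options and flag additions are suggestions to verify, not known contents of the working file, and Options may be blocked by AllowOverride on some shared hosts:
        # Stop content negotiation from mapping /tags/... to tags.php before the rules run
        Options -MultiViews
        RewriteEngine On

        # Rewrite rules for tags
        RewriteRule ^tags/(\w+)/(\d+)/?$ tags.php?tag_name=$1&tag_id=$2 [L,QSA]
        RewriteRule ^tags/(\w+)/?$       tags.php?tag_name=$1 [L,QSA]
        RewriteRule ^tags/?$             tags.php?tag_name= [L]
    The [L] flags also reduce the chance of rule loops, which is one common source of the intermittent Internal Server Error mentioned above.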

    Read the article

  • links for 2011-01-03

    - by Bob Rhubart
    - Using Solaris zfs + iscsi targets with Oracle VM (Wim Coekaerts Blog): "I was playing with my Oracle VM setup and needed some shared storage that was block based. I did not have a storage array available but I did have a Solaris box, that I use for Oracle VDI, available." - Wim Coekaerts (tags: oracle otn solaris oraclevm virtualization)
    - DanT's GridBlog: Oracle Grid Engine: Changes for a Bright Future at Oracle: "Today, we are entering a new chapter in Oracle Grid Engine's life. Oracle has been working with key members of the open source community to pass on the torch for maintaining the open source code base to the Open Grid Scheduler project hosted on SourceForge." - Dan Templeton (tags: oracle gridengine)
    - Oracle Fusion Middleware Security: How do I secure my services? "I've been up early for a couple of days talking to a customer about how they should secure their services," says Chris Johnson. "I'm going to tell you what I told them." (tags: oracle fusionmiddleware security)
    - OldSpice your Innovation - Dangers of Status Quo E2.0 | Enterprise 2.0 Blogs: "If organizations only leverage E2.0 technologies in a 'me too' fashion, they are essentially using a bucket to bail water from a leaking ship." - John Brunswick (tags: oracle enteprise2.0)
    - The Aquarium: GlassFish in 2011 - What to expect: A look into the Glassfish crystal ball... (tags: oracle glassfish)
    - Andrejus Baranovskis's Blog: Fusion Middleware 11g Security - Retrieve Security Groups from ADF 11g: Oracle ACE Director Andrejus Baranovskis shows you what to do when you need to access security information directly from an ADF 11g application. (tags: oracle otn fusionmiddleware security adf)
    - @eelzinga: Book review: Oracle SOA Suite 11g R1 Developer's Guide: "What I really liked in this book...was the compare/description of the Oracle Service Bus. The authors did a great job on describing functionality of components existing in the SOA Suite and how to model them in your own process." - Oracle ACE Eric ElZinga (tags: oracle oracleace soa bookreview soasuite)

    Read the article

  • Middle tier language for interfacing C/C++ with db and web app

    - by ggkmath
    I have a web application requiring a middle-tier language to communicate between an Oracle database and math routines on a Linux server and a Flex-based application on the client. I'm not a software expert, and need recommendations for which language to use for the middle tier. The math routines are currently in Matlab but will be ported to C (or C++) as shared libraries. Thus, by default there's some C or C++ communication necessary. These routines rely on FFTW (www.fftw.org), which is called directly from C or C++ (thus, I don't see re-writing these routines in another language). The middle tier must manage traffic between the client, the math routines, and the Oracle database. The client will trigger the math routines asynchronously, with the results saved in the db and transferred back to the client, etc. The middle tier will also need to authenticate user accounts/passwords and send out various administrative emails. Originally I thought PHP the obvious choice, but interfacing multiple clients asynchronously with the C or C++ routines doesn't seem straightforward. Then I thought, why not just keep the whole middle tier in C or C++, but I'm not sure if this is done in the industry (C or C++ doesn't seem as web-friendly as other languages). There's always Java + JNI, but maybe that introduces other complications (not sure). Any feedback appreciated.

    Read the article

  • Automated backups for Windows Azure SQL Database

    - by Greg Low
    One of the questions that I've often been asked is about how you can backup databases in Windows Azure SQL Database. What we have had access to was the ability to export a database to a BACPAC. A BACPAC is basically just a zip file that contains a bunch of metadata along with a set of bcp files for each of the tables in the database. Each table in the database is exported one after the other, so this does not produce a transactionally-consistent backup at a specific point in time. To get a transactionally-consistent copy, you need a database that isn't in use.The easiest way to get a database that isn't in use is to use CREATE DATABASE AS COPY OF. This creates a new database as a transactionally-consistent copy of the database that you are copying. You can then use the export options to get a consistent BACPAC created.Previously, I've had to automate this process by myself. Given there was also no SQL Agent in Azure, I used a job in my on-premises SQL Server to do this, using a linked server configuration.Now there's a much simpler way. Windows Azure SQL Database now supports an automated export function. On the Configuration tab for the database, you need to enable the Automated Export function. You can configure how often the operation is performed for you, and which storage account will be used for the backups.It's important to consider the cost impacts of this as well. You are charged for how ever many databases are on your server on a given day. So if you enable a daily backup, you will double your database costs. Do not schedule the backups just before midnight UTC, as that could cause you to have three databases each day instead of one.This is a much needed addition to the capabilities. Scott Guthrie also posted about some other notable changes today, including a preview of a new premium offering for SQL Database. In addition to the Web and Business editions, there will now be a Premium edition that has reserved (rather than shared) resources. You can read about it all in Scott's post here: http://weblogs.asp.net/scottgu/archive/2013/07/23/windows-azure-july-updates-sql-database-traffic-manager-autoscale-virtual-machines.aspx
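    For reference, the copy-then-export step described above comes down to a single T-SQL statement, run against the master database of the destination server (the database and server names below are placeholders):
        -- Creates a transactionally consistent copy that can then be exported to a BACPAC
        CREATE DATABASE SalesDb_Copy AS COPY OF myserver.SalesDb;
    The copy is billed like any other database for as long as it exists, which is the same cost consideration raised above for the automated exports, so it is worth dropping the copy once the BACPAC has been produced.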

    Read the article

  • How to retain secondary hard drive mounts at reboot and keep shares?

    - by Tom
    I'm running Ubuntu 12.04. A second hard drive connected to this computer does not mount when the computer boots. Additionally, I have set up the drive to be shared, but the share is not retained; it is lost after each boot. My main system drive and a removable drive mount OK and their shares remain between boots. Additional information follows:
    D2Linux sda1 is the secondary hard drive
    L-Freeagent sdc1 is the removable drive
    Here is the contents of fstab immediately after booting (D2Linux /dev/sda1 not yet mounted):
    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    proc /proc proc nodev,noexec,nosuid 0 0
    # / was on /dev/sdb1 during installation
    UUID=43d29a82-66b3-40f3-91ed-735a27a60004 / ext4 errors=remount-ro 0 1
    # swap was on /dev/sdb5 during installation
    UUID=cf8e3351-11d0-487a-8a6e-e499c2e88a10 none swap sw 0 0
    Here is the output of mount with all drives mounted (I did not restore the share):
    /dev/sdb1 on / type ext4 (rw,errors=remount-ro)
    proc on /proc type proc (rw,noexec,nosuid,nodev)
    sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
    none on /sys/fs/fuse/connections type fusectl (rw)
    none on /sys/kernel/debug type debugfs (rw)
    none on /sys/kernel/security type securityfs (rw)
    udev on /dev type devtmpfs (rw,mode=0755)
    devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
    tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
    none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
    none on /run/shm type tmpfs (rw,nosuid,nodev)
    gvfs-fuse-daemon on /home/tom/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=tom)
    /dev/sdc1 on /media/L-Freeagent type ext4 (rw,nosuid,nodev,uhelper=udisks)
    /dev/sda1 on /media/D2Linux type ext4 (rw,nosuid,nodev,uhelper=udisks)
    Thank you!
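    The fstab above only lists the system drive and swap, which is why /dev/sda1 never mounts at boot; it is only mounted later by udisks when the desktop touches it. A sketch of the missing entry, with a placeholder UUID that would need to be replaced by the value 'sudo blkid /dev/sda1' reports:
        # D2Linux secondary drive (replace the UUID with the output of blkid for /dev/sda1)
        UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /media/D2Linux  ext4  defaults  0  2
    With the drive mounted from fstab at boot, the share definition no longer points at a path that is absent at login, which is a common reason a share appears to be lost between boots.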

    Read the article

  • Distinguishing between UI command & domain commands

    - by SonOfPirate
    I am building a WPF client application using the MVVM pattern that provides an interface on top of an existing set of business logic residing in a library which is shared with other applications. The business library followed a domain-driven architecture using CQRS to separate the read and write models (no event sourcing). The combination of technologies and patterns has brought up an interesting conundrum: The MVVM pattern uses the command pattern for handling user-interaction with the view models. .NET provides an ICommand interface which is implemented by most MVVM frameworks, like MVVM Light's RelayCommand and Prism's DelegateCommand. For example, the view model would expose a number of command objects as properties that are bound to the UI and respond when the user performs actions like clicking buttons. Many implementations of the CQRS use the command pattern to isolate and encapsulate individual behaviors. In my business library, we have implemented the write model as command / command-handler pairs. As such, when we want to do some work, such as create a new order, we 'issue' a command (CreateOrderCommand) which is routed to the command-handler responsible for executing the command. This is great, clearly explained in many sources and I am good with it. However, take this scenario: I have a ToolbarViewModel which exposes a CreateNewOrderCommand property. This ICommand object is bound to a button in the UI. When clicked, the UI command creates and issues a new CreateOrderCommand object to the domain which is handled by the CreateOrderCommandHandler. This is difficult to explain to other developers and I am finding myself getting tongue-tied because everything is a command. I'm sure I'm not the first developer to have patterns overlap like this where the naming/terminology also overlap. How have you approached distinguishing your commands used in the UI from those used in the domain? (Edit: I should mention that the business library is UI-agnostic, i.e. no UI technology-specific code exists, or will exists, in this library.)
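    One way to keep the vocabulary straight when explaining it (purely illustrative; the type, property and member names below are hypothetical and not from the library in question) is that the UI-side command is an ICommand property on a view model, while the domain-side command is an immutable message class handled by the business library:
        using System;
        using System.Windows.Input;

        // UI layer: the WPF command the toolbar button binds to
        public class ToolbarViewModel
        {
            public ICommand CreateNewOrderCommand { get; private set; }
            // ... initialized with a RelayCommand/DelegateCommand that builds and sends the domain command
        }

        // Domain layer: the CQRS command message routed to its command handler
        public sealed class CreateOrderCommand
        {
            public Guid OrderId { get; private set; }
            public string CustomerName { get; private set; }

            public CreateOrderCommand(Guid orderId, string customerName)
            {
                OrderId = orderId;
                CustomerName = customerName;
            }
        }
    In conversation, qualifying them as "view-model commands" versus "domain command messages" usually removes the ambiguity without renaming anything in either codebase.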

    Read the article

  • Using a permutation table for simplex noise without storing it

    - by J. C. Leitão
    Generating Simplex noise requires a permutation table for randomisation (e.g. see this question or this example). In some applications, we need to persist the state of the permutation table. This can be done by creating the table, e.g. using
    def permutation_table(seed):
        table_size = 2**10  # arbitrary for this question
        l = range(1, table_size + 1)
        random.seed(seed)  # ensures the same shuffle for a given seed
        random.shuffle(l)
        return l + l  # see shared link why l + l; is a detail
    and storing it. Can we avoid storing the full table by generating the required elements every time they are required? Specifically, I currently store the table and access it with table[i] (table is a list). Can I avoid storing it by having a function that computes element i, e.g. get_table_element(seed, i)? I'm aware that cryptography already solved this problem using block ciphers; however, I found it too complex to go deep and implement a block cipher. Does anyone know of a simple implementation of a block cipher for this problem?
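    A minimal sketch of the block-cipher idea, assuming the table size stays a power of two: a few rounds of a keyed Feistel network permute the index space directly, so element i can be computed on demand without ever materialising the shuffled list. The round function below is an arbitrary keyed hash, not a vetted cipher, which is fine for noise but not for security.
        import hashlib

        TABLE_SIZE = 2 ** 10   # matches the example above; needs an even number of index bits
        HALF_BITS = 5          # a 10-bit index split into two 5-bit halves
        HALF_MASK = (1 << HALF_BITS) - 1

        def _round(value, seed, round_no):
            # Arbitrary keyed round function; any stable hash of (value, seed, round) works.
            data = "{}:{}:{}".format(value, seed, round_no).encode("utf-8")
            return hashlib.sha256(data).digest()[0] & HALF_MASK

        def get_table_element(seed, i):
            """Return element i of a seed-determined permutation of range(TABLE_SIZE)."""
            i %= TABLE_SIZE                      # mimics the l + l wrap-around of the stored table
            left, right = i >> HALF_BITS, i & HALF_MASK
            for round_no in range(4):            # each Feistel round is a bijection, so the whole map is one
                left, right = right, left ^ _round(right, seed, round_no)
            # Values fall in range(TABLE_SIZE); shift by one if the 1..table_size range above matters.
            return (left << HALF_BITS) | right
    get_table_element(seed, i) then replaces table[i] at the cost of a few hashes per lookup; if that is too slow, the same function can also be used once at start-up to regenerate the full table from the seed instead of persisting it.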

    Read the article

  • Sharing business logic between server-side and client-side of web application?

    - by thoughtpunch
    Quick question concerning shared code/logic in the back and front ends of a web application. I have a web application (Rails + heavy JS) that parses metadata from HTML pages fetched via a user-supplied URL (think Pinterest or Instapaper). Currently this processing takes place exclusively on the client side; the code that fetches the URL and parses the DOM is in a fairly large set of JS scripts in our Rails app. Occasionally we want to do this processing on the server side of the app. For example, what if a user supplies a URL but has JS disabled, or uses a non-standards-compliant browser, etc.? Ideally I'd like to be able to process these URLs in Ruby on the back end (perhaps in asynchronous background jobs) using the same logic that our JS parsers use WITHOUT porting the JS to Ruby. I've looked at systems that allow you to execute JS scripts in the backend like execjs, as well as Ruby-to-JavaScript compilers like OpalRB that would hopefully allow "write once, execute many", but I'm not sure that either is the right decision. What's the best way to avoid business logic duplication for apps that need to do both client-side and server-side processing of similar data?
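    A minimal sketch of the ExecJS route mentioned above, assuming the parsing logic can be packaged as plain functions in a single JS file (the file path and function name here are made up):
        require "execjs"
        require "net/http"
        require "uri"

        # Load the same parser the browser uses and compile it into a JS context
        source  = File.read("app/assets/javascripts/metadata_parser.js")
        context = ExecJS.compile(source)

        # Fetch the page server-side and hand the HTML string to the shared function
        html     = Net::HTTP.get(URI("http://example.com/some/page"))
        metadata = context.call("parseMetadata", html)  # must be a pure function of the HTML string
    The catch is that ExecJS runtimes have no DOM, so this only works if the shared logic operates on strings (regular expressions, or a JS HTML parser bundled into the same file) rather than on document nodes supplied by the browser.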

    Read the article

  • Jumpstart your MySQL Cluster Knowledge

    - by Antoinette O'Sullivan
    Join companies in the web, gaming, telecoms and mobile areas by learning about MySQL Cluster's distributed, shared-nothing, real-time design. The 3-day MySQL Cluster course teaches you how to configure and manage the cluster nodes to ensure high availability. Learn how to install different nodes and understand cluster internals. Here is a sample of some events on the schedule for this course:
    Location | Date | Delivery Language
    Wien, Austria | 4 February, 2013 | German
    Prague, Czech Republic | 10 December, 2012 | Czech
    London, England | 12 December, 2012 | English
    Hamburg, Germany | 21 January, 2013 | German
    Stuttgart, Germany | 26 March, 2013 | German
    Budapest, Hungary | 4 December, 2012 | Hungarian
    Warsaw, Poland | 10 December, 2012 | Polish
    Lisbon, Portugal | 3 December, 2012 | European Portuguese
    Barcelona, Spain | 19 November, 2012 | Spanish
    Madrid, Spain | 25 February, 2013 | Spanish
    Jakarta, Indonesia | 21 January, 2013 | English
    Singapore | 29 October, 2012 | English
    Chicago, United States | 27 March, 2013 | English
    Reston, United States | 6 February, 2013 | English
    For more information on the authentic MySQL curriculum go to http://oracle.com/education/mysql

    Read the article

  • Creating, using and managing XML component dictionaries quick tutorials

    - by drrwebber
    XML Component Dictionary capabilities are provided in conjunction with the CAM Editor toolset. These dictionaries accelerate the development of consistent XML information exchanges using standard sets of dictionary components. The quick tutorials are aimed at showing the 'how to' of the basic capabilities to jump start the use of XML dictionaries with the CAM Editor. The collection of dictionary tutorial videos runs for a total of approximately 20 minutes; each video can also be reviewed individually. Learn how to use the dictionary functions to create dictionaries by harvesting data model components from existing XSD schema, SQL database table schema, or simple Excel / Open Office spreadsheets with tables of components listed. Also included are tips and functions relating to the use of NIEM exchange development, IEPD and EIEM techniques. These videos should be viewed in conjunction with the overall concepts and techniques described in the companion video on the CAM Editor and Dictionaries overview. The approach is aligned with the OASIS and Core Components Technical Specification (CCTS) standards specifications for XML components and dictionaries. Dictionary collections can be stored locally on the file system or local network, shared collaboratively on a web or cloud deployment, or managed securely using the Oracle Enterprise Repository (OER) tool. Also included are techniques relating to the use of the NIEM approach for developing XML exchange schema and IEPD packages, including generating reuse scores, wantlists, and cross-reference spreadsheets. Included in the latest release of the CAM Editor is the ability to use the analyse dictionary tool to detect duplicate components, conflicting component definitions, missing component descriptions and so on. This ensures high quality dictionary component specifications. Using the CAM Editor you can also create MindMap models and UML physical models of your dictionary component sets. For a complete guide to using the CAM Editor see the main YouTube video tutorials website and the CAM Editor website.

    Read the article

  • The Oracle Platform

    - by Naresh Persaud
    Today's enterprises typically create identity management infrastructures using ad-hoc, multiple point solutions. Relying on point solutions introduces complexity and a high cost of ownership, leading many organizations to rethink this approach. In a recent worldwide study of 160 companies conducted by Aberdeen Research, there was a discernible shift in this trend as businesses are now looking to move away from the point-solution approach with multiple vendors and adopt an integrated platform approach. By deploying a comprehensive identity and access management strategy using a single platform, companies are saving as much as 48% in IT costs while reducing audit deficiencies by nearly 35%. According to Aberdeen's research, choosing an integrated suite or "platform" of solutions for Identity Management from a single vendor can have many advantages over choosing "point solutions" from multiple vendors. The Oracle Identity Management Platform is uniquely designed to offer several compelling benefits to our customers.
    Shared Services: Instead of separate solutions for Administration, Authentication, Authorization, Audit and so on, Oracle Identity Management offers a set of shared services that can be consumed by each component in the stack and by developers of new applications.
    Actionable Intelligence: The most compelling benefit of the Oracle platform is "actionable intelligence", which means that if there is a compliance violation, the same platform can fix it, and if a user is logging in from an untrusted device or we detect an attack, the platform can act proactively on that information.
    Suite Interoperability: With the Oracle platform the components all connect and integrate with each other. So if an organization purchases the platform for provisioning and wants to manage access, the same platform can offer access management, which leads to cost savings.
    Extensible and Configurable: With point solutions you typically get limited ability to extend the tool to address custom requirements, but with the Oracle platform all of the components have a common way to extend the UI and behavior.
    Find out more about the Oracle Platform approach in this presentation.

    Read the article

  • VCE at the VCS!?!?

    - by John Murphy
    VCE stands for Value Chain Execution, VCS stands for Value Chain Summit and in February in San Francisco, VCE will be fully represented at the VCS. The Value Chain Summit is Oracle's first large scale Supply Chain Management event specifically aimed at both current and prospective users of Oracle Supply Chain Management applications. This inaugural event is Feb 4-6, 2013 in downtown San Francisco.  Over 1000 attendees will meet to discuss and see what's new in product releases, what recent business trends are impacting supply chains, how technology is evolving, where supply chains are headed, and what companies are doing about it.  As the market leader in Value Chain Execution applications, VCE sessions and demonstrations will provide attendees direct access to the most sophisticated logistics applications in the world.  Already a user of VCE applications?   That's all the more reason to attend as sessions are specifically designed to address the latest features in the upcoming 6.3 release.  Detailed content will be shared by development and strategy personnel so you can get all the answers you need to improve your use of the VCE applications you currently have deployed.   Please join us in San Francisco in February!  

    Read the article

  • Video game "Gish" will only launch from command line

    - by aberration
    Platform: Lubuntu 11.10 x64
    Program: Gish
    When I try to launch Gish from the command line (/opt/gish/gi.sh), there are no problems. But when I try to launch it from the LXDE menu, it will not start. Contents of /usr/share/applications/gish.desktop:
    [Desktop Entry]
    Categories=Game;ActionGame;AdventureGame;ArcadeGame;
    Exec=/opt/gish/gi.sh
    Path=/opt/gish
    Icon=x-gish
    Terminal=false
    Type=Application
    Name=Gish
    I tried changing Terminal=false to Terminal=true to debug it, but then I just got a blank terminal, and the game didn't start.
    Edit: Here is some additional information, as requested by Eliah Kagan below:
    I tried editing /usr/share/applications/gish.desktop, as recommended, but it had no effect. However, ~/.xsession-errors contained the following error:
    [: 8: x86_64: unexpected operator
    ./gish_32: error while loading shared libraries: libGL.so.1: wrong ELF class: ELFCLASS64
    I think there's a problem with the /opt/gish/gi.sh shell script. This is its contents:
    cd /opt/gish/
    MACHINE_TYPE=`uname -m`
    if [ ${MACHINE_TYPE} == 'x86_64' ]; then
      ./gish_64
    else
      ./gish_32
    fi
    I'm not too familiar with Bash, so hopefully someone else can point out the error. I have a 64-bit machine. I think that when the script is run from the command line, it's properly launching the 64-bit version (/opt/gish/gish_64), but when it's run from the LXDE menu, it's launching the 32-bit version (/opt/gish/gish_32), which is causing the libGL.so.1 error. However, this may be related to my libGL.so.1 problems with 2 other games.
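    The `[: 8: x86_64: unexpected operator` line in .xsession-errors suggests the menu launcher runs gi.sh with /bin/sh (dash), where the bashism `==` fails, so the test is skipped and the script falls through to the 32-bit binary, matching the ELFCLASS64 error. A sketch of the same script with a portable test and an explicit shebang:
        #!/bin/sh
        cd /opt/gish/
        MACHINE_TYPE=`uname -m`
        # POSIX test uses a single '='; '==' only works when the script is run by bash
        if [ "${MACHINE_TYPE}" = "x86_64" ]; then
            ./gish_64
        else
            ./gish_32
        fi
    Alternatively, pointing the Exec line at `bash /opt/gish/gi.sh` should have the same effect without editing the script.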

    Read the article

  • DIY Halloween Decoration Uses Simple Silohuettes

    - by Jason Fitzpatrick
    While many of the Halloween decorating tricks we’ve shared over the years involve lots of wire, LEDs, and electronic guts, this one is thoroughly analog (and easy to put together). A simple set of silhouettes can cheaply and quickly transform the front of your house. Courtesy of Matt over at GeekDad, the transformation is easy to pull off. He explains: It’s really just about as simple as you could hope for. The materials needed are: black posterboard or black-painted cardboard; colored cellophane or tissue paper; and tape. The only tools needed are: measuring tape; some sort of drawing implement — chalk works really well; and scissors and/or X-Acto knife. And while you need some drawing talent, the scale is big enough and the need for precision little enough that you don’t need that much. For a more thorough rundown of the steps hit up the link below or hit up Google Images to find some monster silhouette inspiration. Window Monsters [Geek Dad]

    Read the article

  • Use two networks at the same time?

    - by Christopher
    I want to use Ubuntu 10.10 Server in a classroom, a computer lab whose bandwidth is provided by a local cable ISP. That's no problem, though the school network has an IP printer that I want to use. I cannot reach the printer through the cable Internet. But, I have two network cards. How is it possible to use both networks at once? eth0 (static 192.168.1.254) is plugged into a four-port router, 192.168.1.1. On the public side of the four-port router is Internet provided by the cable company. I also have the classroom workstations plugged into a switch. The switch is plugged into the four-port router. The whole classroom is wired into the cable Internet. The other NIC, eth1, could it be plugged into an Ethernet jack in the wall? It uses the school network, and I might receive by DHCP an IP address like 10.140.10.100, with the printer on maybe 10.120.50.10. I was thinking about installing the printer on the server so that it could be shared with the workstations. But how does this work? Can I just plug eth1 into the school network and access both LANs? Thanks for any insight
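    A minimal sketch of what bringing up the second NIC could look like on Ubuntu 10.10 Server, assuming the school LAN hands out addresses over DHCP (interface names as in the question; reaching the printer on 10.120.50.10 additionally depends on the school network routing between its subnets):
        # /etc/network/interfaces
        auto eth0
        iface eth0 inet static
            address 192.168.1.254
            netmask 255.255.255.0
            gateway 192.168.1.1     # cable Internet stays the default route

        auto eth1
        iface eth1 inet dhcp        # school LAN; watch that its DHCP lease does not install a second default route
    With both interfaces up, the server sees both LANs at once, so the IP printer can be installed on it (for example through CUPS) and shared back to the workstations on the 192.168.1.x side.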

    Read the article

  • Sorting objects before rendering

    - by dreta
    I'm trying to implement a scene graph, and in all the articles I've come across there is talk about object sorting. So you'd sort your objects by "material", for example. Until I sat down and started implementing it, I kind of took this for granted, because it made sense. But now I'm wondering: what does sorting actually change? In my engine, I have a manager for UBOs; I use those to store data that'll be shared between programs, which at the moment only involves time, the camera and projection matrices, and lights (I'm not worrying about managing which lights affect which objects at the moment). Now for each model I have to change the model-to-world matrix uniform, and no sorting is going to change that. So is the jump from changing this matrix to also setting a material for each object that bad? I vaguely remember reading somewhere that each time you change something in the pipeline, it has to get flushed, and that can cause performance issues. But since I'm setting up a model-to-world matrix for each draw call anyway, why should I ever be concerned about this? By the way, is there any information on whether changing a uniform or calling glBufferSubData is more (or less) expensive?
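    For the sorting itself, a minimal sketch of what engines typically do (the types and bit layout below are hypothetical): build a small sort key per draw call and order the frame's draw list by it, so heavier state changes such as program or material binds happen once per group instead of once per object.
        #include <algorithm>
        #include <cstdint>
        #include <vector>

        struct DrawCall {
            std::uint32_t programId;   // shader program
            std::uint32_t materialId;  // textures / material uniforms
            std::uint32_t meshId;      // vertex buffers
            // ... per-object transform, etc.
        };

        // Pack the most expensive state changes into the most significant bits
        static std::uint64_t SortKey(const DrawCall& d) {
            return (std::uint64_t(d.programId)  << 40) |
                   (std::uint64_t(d.materialId) << 20) |
                    std::uint64_t(d.meshId);           // assumes each id fits in 20 bits
        }

        void SortDrawCalls(std::vector<DrawCall>& calls) {
            std::sort(calls.begin(), calls.end(),
                      [](const DrawCall& a, const DrawCall& b) {
                          return SortKey(a) < SortKey(b);
                      });
        }
    The per-object model-to-world uniform still changes on every draw either way; the gain from sorting is only in the heavier state (program, material, texture binds), which is why it matters more as the number of distinct materials grows.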

    Read the article

  • Experiencing the New Social Enterprise

    - by kellsey.ruppel
    Social media and networking tools, popularly known as Web 2.0 technologies, are rapidly transforming user expectations of enterprise systems. Many organizations are investing in these new tools to cultivate a modern user experience in an "Enterprise 2.0" environment that unlocks the full potential of traditional IT systems and fosters collaboration in key business processes. Here are some key points and takeaways from the keynotes yesterday at the Enterprise 2.0 Conference:
    - Social networks continue to forge complex connections between people, processes, and content, facilitating collaboration and the sharing of information
    - The customer of today lives inside of Facebook, on your web, or has an app for that, and they have a question and want an answer NOW
    - Empowered employees are able to connect to colleagues, build relationships, develop expertise, self-select projects of interest to them, and expand skill sets well beyond their formal roles
    - A fundamental promise of Enterprise 2.0 is that ideas will be generated and shared by everyone across the organization, leading to increased innovation, agility, and competitive advantage
    How well is your organization delivering on these concepts? Are you able to successfully bring together people, processes and content? Are you providing the social tools your employees want and need? Are you experiencing the new social enterprise?

    Read the article

  • Book Review: Professional ASP.Net MVC4

    - by Sam Abraham
    The past few weeks have been particularly busy as I continue to dedicate a bigger portion of my free time to refreshing my memory and enhancing my knowledge of best practices pertaining to technologies we plan on using for a major upcoming project. In this blog post, I will be providing a brief overview of my latest reading, "Professional ASP.NET MVC4" by Jon Galloway, Phil Haack, Brad Wilson and K. Scott Allen. This book is a must-read for web developers looking to enhance their MVC expertise with best practices and tips shared by recognized industry experts. The book takes the reader on a 16-chapter journey towards being a better ASP.NET MVC developer, with chapter 16 putting all the information covered into practical context by dissecting the implementation of Nuget.org, a real-life, open-source ASP.NET MVC project. All code samples referenced in this book are conveniently accessible via NuGet, a free, open-source library package manager that installs as a Visual Studio extension. Chapters 2, 3 and 4 thoroughly cover MVC's various components: Controllers "C", Views "V" and Models "M" respectively. Chapter 5 covers additional extension methods (Helpers) provided to speed and ease the use of common HTML elements such as forms, textboxes and grids, to name a few. Chapter 6 tackles built-in validation while providing examples and use cases on implementing custom validation that plugs into the MVC framework. Chapters 7 through 13 discuss the latest on Membership, Ajax, Routing, NuGet and the ASP.NET Web API. Chapters 12 (Dependency Injection) and 13 (Unit Testing) demonstrate a big competitive advantage of MVC with its ease of test-ability and plug-ability. Chapters 14 and 15 target the advanced developer, showcasing how to extend MVC to customize and replace every piece in the framework. In conclusion, I strongly recommend Professional ASP.NET MVC 4 as an excellent read for both developers already using MVC and those getting started with the framework. Many thanks to the Wiley/Wrox User Group Program for their support of our West Palm Beach Developers' Group. You can access my reviews of books I recently read:
    Professional ASP.NET Design Patterns
    Professional WCF 4.0
    Inside Windows Communication Foundation
    Inside Microsoft SQL Server 2008 series

    Read the article

  • System broken after installing Gtk+-3.4.1 with broadway backend enabled

    - by Roman D. Boiko
    I am running Ubuntu 11.10 in VirtualBox. I installed Gtk+ 3.4.1 (the latest stable release) from source with the X11 and broadway backends enabled. In order to do that, I also installed the latest versions of glib, libffi, libtiff, libjpeg, gdk-pixbuf, and pango. Each of them was configured with default options, i.e. they were installed to /usr/local (at least, I see the respective folders in /usr/local/include). After reboot and login (regardless of which user), the desktop is grey for about 30 seconds and nothing is displayed. Then Nautilus starts, but nothing else (my locale is Ukrainian, but there is nothing important in the text). During boot, I can access a command prompt as root, use dpkg, etc., but I don't know what to do. One idea is to reinstall Gtk+ and the other libraries with prefix /usr or /usr/shared. I will try that, but it is quite time-consuming, so any ideas would be welcome. Reverting to an earlier snapshot is still possible, but it is 6 days old and I would like to try to solve the problem first.

    Read the article
