Search Results

Search found 36521 results on 1461 pages for 'aq advanced queue oracle support streams propagation schedule dblink troubleshoo'.

Page 707 of 1461

  • New Statement Of Direction published

    - by carstenczarski
    The APEX development team has published a new Statement Of Direction (SOD) for version 5.0. As always, it covers improvements and extensions to existing functionality as well as the introduction of new features. As always, the Statement Of Direction is meant to share the plans and goals of the APEX development team with the community. Among other things, the plans for APEX 5.0 include:
    • providing modal dialogs declaratively
    • bringing back the drag-and-drop layout editor
    • supporting HTML5 even better
    • offering more variants and options for PDF output
    • introducing dedicated user interfaces for tablets
    • enabling master/detail/detail forms
    • allowing multiple interactive reports on one page
    • and much more.
    The APEX success story continues.

    Read the article

  • Silicon Valley Code Camp 2012 - Submit Your Talks

    - by arungupta
    Silicon Valley Code Camp follows three rules:
    • Given by/for the community
    • Always free
    • Never occurs during work hours
    I've spoken there in 2011, 2010, 2009, 2008, and 2007, have again submitted a talk this year, and will submit more! It's one of the best organically grown code camps, with attendance growing constantly over the past 6 years. Here is a chart that shows the number of attendees who registered and attended, and the sessions delivered, over the past 6 years. If you wonder why there is such a big gap between "registered" and "attended", that's because this event is FREE! Yes, 100% free. If you are in or around Silicon Valley then you have no reason not to participate in or speak at SVCC. You have the opportunity to meet all the local JUG leaders and the community "rockstars" :-) Date: Oct 6/7, 2012. Venue: Foothill College, 12345 El Monte Road, Los Altos Hills, CA. Submit today or register!

    Read the article

  • EPM 11.1.2 - Receive Anonymous Level Security token message in IE8 when trying to access Shared Services or Workspace URL

    - by Ahmed A
    If you get "Receive Anonymous Level Security token" message in IE8 when trying to access Shared Services or Workspace URL.Workaround:a. Go to Start > Run and enter dcomcnfgb. Expand Component Services, Expand Computers and right click on My Computer and select Propertiesc. Click on the Default Properties tab.  Change the Default Authentication Level to Connect.  Click apply and then OK.d. Launch the IE browser again and you will be able to access the URL.

    Read the article

  • Avoid overwriting of logs

    - by Koppar
    What usually happens is that the logs fill up and begin to be overwritten, which makes them useless. To avoid this, use these 2 properties in the logging.properties file to suit your requirement: java.util.logging.FileHandler.count = x (it is 1 by default; increase it to a bigger value). This number specifies the number of log files that can be created before overwriting starts. For instance, if you set it to 5, java0.log, java1.log ... java5.log will be created to log details, so more information can be captured. Likewise, java.util.logging.FileHandler.limit specifies the maximum size of each log file.
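
    For example, a logging.properties fragment along these lines (the values here are only an illustration) keeps five log files of up to roughly 1 MB each before the oldest gets overwritten:

    # Number of log files to cycle through before overwriting starts (default is 1)
    java.util.logging.FileHandler.count = 5
    # Maximum size of each log file in bytes (0 means no limit)
    java.util.logging.FileHandler.limit = 1000000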

    Read the article

  • Basic Puppet installation with Solaris 11.2 beta

    - by user13366125
    At the recent announcement we talked a lot about the Puppet integration. But how do you set it up? I want to show this in this blog entry. The example I'm using is also useful in practice: due to the extremely low overhead of zones, I frequently see really large numbers of zones on a single system. Changing /etc/hosts or changing an SMF service property on 3 systems is not that hard. Doing it on a system with 500 zones is ... let's put it diplomatically ... a job you give to someone you want to punish. Puppet can help in this case by managing the configuration and easing its distribution. You describe the changes you want to make in a file, or set of files, called a manifest in the Puppet world, and then roll them out to your servers, no matter whether they are virtual or physical. A warning first: Puppet is a really, really vast topic. This article is really basic and goes no more than toe-deep into the possibilities and capabilities of Puppet. It doesn't try to explain Puppet ... just how you get it up and running and do basic tests. There are many good books on Puppet. Please read one of them, and the concepts and the example will become much clearer immediately. (more)

    Read the article

  • PeopleSoft's Enterprise Financial Management 8.9

    Fred interviews Annette Melatti, Senior Director of Financials Product Marketing, and discusses the latest release and the value it offers to customers, including compliance, a superior ownership experience, industry-specific solution extensions, enhancements to the enterprise service automation solution, and the introduction of the new asset lifecycle management solution.

    Read the article

  • Google Cloud DNS and DNSSEC?

    - by Joe Burnett
    Since Google Cloud DNS does not currently support the record types needed for DNSSEC, is there any way to begin implementing DNSSEC using TXT records? I am using Google Cloud DNS, and it currently only supports these record types: SOA, A, AAAA, CNAME, MX, NS, SPF, SRV, PTR and TXT. Am I able to do it while restricted to these record types, or do I have to wait until support is built into the service? I am just wondering because I would really like to ensure absolute integrity for my company, so that I only convey realness in its purity. =D

    Read the article

  • YouTube: Realtime Graph Sharing on the NetBeans Platform

    - by Geertjan
    Yet another really cool movie by the Maltego team in South Africa, this time showing Visual Library widgets in their NetBeans Platform application shared in realtime between different users of the Maltego open source intelligence gathering and analytics software: What you see above is Maltego CaseFile. Below you find out more about it in the latest blog entry on the Maltego site: http://maltego.blogspot.be/2013/11/maltego-casefile-v2-released.html

    Read the article

  • GeoTools Demo Embedded in an Application Framework via Maven

    - by Geertjan
    GeoTools 8.4 was very recently released, according to its active blog, and to celebrate, here's a starting point for working with GeoTools on the NetBeans Platform. The sources of the above are below, as a Maven project, so this project can be used in any IDE or from the command line: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/tutorials/geospatial/geotools/MyGeospatialSystem Though quite dated, the GeoTools NetBeans Quick Start is very helpful, especially since it uses Maven too, though not the NetBeans Platform, unlike the above sample. From the point of view of NetBeans Platform developers, the GeoTools JMapPane class is very useful, providing the integration point between GeoTools and the rest of the NetBeans Platform application. Being integrated into the NetBeans Platform means that a host of standard features are now available to the GeoTools features, e.g., print functionality, which only requires a runtime dependency on the NetBeans Print API, together with the "print.printable" client property put into the constructor of the TopComponent. By the way, I've spent some time now and again being confused about the difference between GeoTools and GeoToolkit. Here's an interesting starting point for understanding the differences and history between them. Soon I'd like to have a similar example to the above for GeoToolkit.
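
    As a rough sketch of that integration point (the class name, layout, and the commented-out JMapPane line are hypothetical; the "print.printable" client property is the Print API hook mentioned above):

    import java.awt.BorderLayout;
    import org.openide.windows.TopComponent;

    public final class MapTopComponent extends TopComponent {

        public MapTopComponent() {
            setName("Map");
            setLayout(new BorderLayout());
            // The GeoTools JMapPane would be added here to render the map:
            // add(mapPane, BorderLayout.CENTER);

            // With a runtime dependency on the NetBeans Print API, this client
            // property is enough to make the window's content printable:
            putClientProperty("print.printable", Boolean.TRUE);
        }
    }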

    Read the article

  • RPi and Java Embedded GPIO: It all begins with hardware

    - by hinkmond
    So, you want to connect low-level peripherals (like blinky-blinky LEDs) to your Raspberry Pi and use Java Embedded technology to program it, do you? You sick foolish masochist. No, just kidding! That's awesome! You've come to the right place. I'll step you through it. And, as with many embedded projects, it all begins with hardware. So, the first thing to do is to get acquainted with the GPIO header on your RPi board. A "header" just means a thingy with a bunch of pins sticking up from it where you can connect wires. See the red box outline in the photo. Now, there are many ways to connect to that header outlined by the red box in the photo (which the RPi folks call the P1 header). One way is to use a breakout kit like the one at Adafruit. But, we'll just use jumper wires in this example. So, to connect jumper wires to the header you need a map of where to connect which wire. That's why you need to study the pinout in the photo. That's your map for connecting wires. But, as with many things in life, it's not all that simple. RPi folks have made things a little tricky. There are two revisions of the P1 header pinout. One for older boards (RPi boards made before Sep 2012), which is called Revision 1. And, one for those fancy 512MB boards that were shipped after Sep 2012, which is called Revision 2. So, first make sure which board you have: either you have the Model A or B with 128MB or 256MB built before Sep 2012 and you need to look at the pinout for Rev. 1, or you have the Model B with 512MB and need to look at Rev. 2. That's all you need for now. More to come... Hinkmond

    Read the article

  • HTML Tidy in NetBeans IDE (Part 2)

    - by Geertjan
    This is what I was aiming for in the previous blog entry: What you can see above (especially if you click to enlarge it) is that I have HTML Tidy integrated into the NetBeans analyzer functionality, which is pluggable from 7.2 onwards. Well, if you set an implementation dependency on "Static Analysis Core", since it's not an official API yet. Also, the scopes of the analyzer functionality are not pluggable. That means you can 'only' set the analyzer's scope to one or more projects, one or more packages, or one or more files. Not one or more folders, which means you can't have a bunch of HTML files in a folder that you access via the Favorites window and then run the analyzer on that folder (or on multiple folders). Thus, to try out my new code, I had to put some HTML files into a package inside a Java application. Then I chose that package as the scope of the analyzer. Then I ran all the analyzers (i.e., standard NetBeans Java hints, FindBugs, as well as my HTML Tidy extension) on that package. The screenshot above is the result. Here's all the code for the above, which is a port of the Action code from the previous blog entry into a new Analyzer implementation:

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.io.StringWriter;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import javax.swing.JComponent;
    import javax.swing.text.Document;
    import org.netbeans.api.fileinfo.NonRecursiveFolder;
    import org.netbeans.modules.analysis.spi.Analyzer;
    import org.netbeans.modules.analysis.spi.Analyzer.AnalyzerFactory;
    import org.netbeans.modules.analysis.spi.Analyzer.Context;
    import org.netbeans.modules.analysis.spi.Analyzer.CustomizerProvider;
    import org.netbeans.modules.analysis.spi.Analyzer.WarningDescription;
    import org.netbeans.spi.editor.hints.ErrorDescription;
    import org.netbeans.spi.editor.hints.ErrorDescriptionFactory;
    import org.netbeans.spi.editor.hints.Severity;
    import org.openide.cookies.EditorCookie;
    import org.openide.filesystems.FileObject;
    import org.openide.loaders.DataObject;
    import org.openide.util.Exceptions;
    import org.openide.util.lookup.ServiceProvider;
    import org.w3c.tidy.Tidy;

    public class TidyAnalyzer implements Analyzer {

        private final Context ctx;

        private TidyAnalyzer(Context cntxt) {
            this.ctx = cntxt;
        }

        @Override
        public Iterable<? extends ErrorDescription> analyze() {
            List<ErrorDescription> result = new ArrayList<ErrorDescription>();
            for (NonRecursiveFolder sr : ctx.getScope().getFolders()) {
                FileObject folder = sr.getFolder();
                for (FileObject fo : folder.getChildren()) {
                    for (ErrorDescription ed : doRunHTMLTidy(fo)) {
                        if (fo.getMIMEType().equals("text/html")) {
                            result.add(ed);
                        }
                    }
                }
            }
            return result;
        }

        private List<ErrorDescription> doRunHTMLTidy(FileObject sr) {
            final List<ErrorDescription> result = new ArrayList<ErrorDescription>();
            Tidy tidy = new Tidy();
            StringWriter stringWriter = new StringWriter();
            PrintWriter errorWriter = new PrintWriter(stringWriter);
            tidy.setErrout(errorWriter);
            try {
                Document doc = DataObject.find(sr).getLookup().lookup(EditorCookie.class).openDocument();
                tidy.parse(sr.getInputStream(), System.out);
                String[] split = stringWriter.toString().split("\n");
                for (String string : split) {
                    //Bit of ugly string parsing coming up:
                    if (string.startsWith("line")) {
                        final int end = string.indexOf(" c");
                        int lineNumber = Integer.parseInt(string.substring(0, end).replace("line ", ""));
                        string = string.substring(string.indexOf(": ")).replace(":", "");
                        result.add(ErrorDescriptionFactory.createErrorDescription(
                                Severity.WARNING,
                                string,
                                doc,
                                lineNumber));
                    }
                }
            } catch (IOException ex) {
                Exceptions.printStackTrace(ex);
            }
            return result;
        }

        @Override
        public boolean cancel() {
            return true;
        }

        @ServiceProvider(service = AnalyzerFactory.class)
        public static final class MyAnalyzerFactory extends AnalyzerFactory {

            public MyAnalyzerFactory() {
                super("htmltidy", "HTML Tidy", "org/jtidy/format_misc.gif");
            }

            public Iterable<? extends WarningDescription> getWarnings() {
                return Collections.EMPTY_LIST;
            }

            @Override
            public <D, C extends JComponent> CustomizerProvider<D, C> getCustomizerProvider() {
                return null;
            }

            @Override
            public Analyzer createAnalyzer(Context cntxt) {
                return new TidyAnalyzer(cntxt);
            }
        }
    }

    The above only works on packages, not on projects and not on individual files.

    Read the article

  • Need a Quick Sure Method to Produce a Formatted Explain Plan? This will help!

    - by user702295
    Please use the following on the production machine to get a formatted explain plan and SQL trace, using the SLOW sql (e.g. 'T_COMB_LIST.COMB_ID = 216') or any other value that takes longer:

    -- Open a new session in SQL*Plus
    -- Make sure you are using an updated PLAN_TABLE
    -- (this can be done by dropping it and recreating it by running: SQL> @?/rdbms/admin/utlxplan.sql)
    set lines 1000
    set pages 1000
    spool xplan_1.txt
    EXPLAIN PLAN FOR
    <<<< Replace this line with exactly the same query you used above. Force a hard parse by modifying the case of a character >>>>
    @?/rdbms/admin/utlxplp
    spool off
    EXIT

    -- Open a second session in SQL*Plus
    ALTER SESSION SET max_dump_file_size = unlimited;
    ALTER SESSION SET tracefile_identifier = '10046';
    ALTER SESSION SET statistics_level = ALL;
    ALTER SESSION SET events '10046 trace name context forever, level 12';
    <<<< Replace this line with exactly the same query you used above. Force a hard parse by modifying the case of a character >>>>
    select 'verify cursor closed' from dual;
    ALTER SYSTEM SET EVENTS '10046 trace name context off';
    EXIT

    Make sure the spooled file is formatted properly and that the 10046 trace has the relevant explain plan in it. Please upload both files (the 10046 trace is generated in udump). Need instructions to find udump?

    sqlplus "/ as sysdba"
    show parameters dump_dest

    This will show you the bdump, cdump and udump locations.
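
    For instance, if the slow statement behind the sample predicate above were a plain query on that table (the column list is hypothetical; only the COMB_ID predicate comes from this note), the placeholder lines could be filled in like this, with the case of one character changed so Oracle treats it as new SQL text and hard-parses it:

    -- Statement as the application runs it:
    SELECT * FROM t_comb_list WHERE t_comb_list.comb_id = 216;

    -- Same statement re-issued for EXPLAIN PLAN / 10046 tracing, with one
    -- character's case modified to force a hard parse:
    EXPLAIN PLAN FOR
    sELECT * FROM t_comb_list WHERE t_comb_list.comb_id = 216;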

    Read the article

  • Java @Contended annotation to help reduce false sharing

    - by Dave
    See this posting by Aleksey Shipilev for details -- @Contended is something we've wanted for a long time. The JVM provides automatic layout and placement of fields. Usually it'll (a) sort fields by descending size to improve footprint, and (b) pack reference fields so the garbage collector can process a contiguous run of reference fields when tracing. @Contended gives the program a way to provide more explicit guidance with respect to concurrency and false sharing. Using this facility we can sequester hot frequently written shared fields away from other mostly read-only or cold fields. The simple rule is that read-sharing is cheap, and write-sharing is very expensive. We can also pack fields together that tend to be written together by the same thread at about the same time. More generally, we're trying to influence relative field placement to minimize coherency misses. Fields that are accessed closely together in time should be placed proximally in space to promote cache locality. That is, temporal locality should condition spatial locality. Fields accessed together in time should be nearby in space. That having been said, we have to be careful to avoid false sharing and excessive invalidation from coherence traffic. As such, we try to cluster or otherwise sequester fields that tend to be written at approximately the same time by the same thread onto the same cache line. Note that there's a tension at play: if we try too hard to minimize single-threaded capacity misses then we can end up with excessive coherency misses running in a parallel environment. There's no single optimal layout for both single-thread and multithreaded environments. And the ideal layout problem itself is NP-hard. Ideally, a JVM would employ hardware monitoring facilities to detect sharing behavior and change the layout on the fly. That's a bit difficult as we don't yet have the right plumbing to provide efficient and expedient information to the JVM. Hint: we need to disintermediate the OS and hypervisor. Another challenge is that raw field offsets are used in the unsafe facility, so we'd need to address that issue, possibly with an extra level of indirection. Finally, I'd like to be able to pack final fields together as well, as those are known to be read-only.
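
    The post predates the final form of the feature; as a minimal sketch of the idea, assuming the sun.misc.Contended annotation that later shipped with JDK 8 (application classes also need the -XX:-RestrictContended VM flag for the annotation to be honored):

    import sun.misc.Contended;   // jdk.internal.vm.annotation.Contended in JDK 9+

    public class HitCounter {

        // Hot, frequently written field: padded onto its own cache line(s)
        // so writers do not invalidate the mostly read-only fields below.
        @Contended
        private volatile long hits;

        // Cold, read-mostly fields can happily share a cache line.
        private final String name;
        private final long createdAt;

        public HitCounter(String name) {
            this.name = name;
            this.createdAt = System.currentTimeMillis();
        }

        // Single-writer increment; use AtomicLong/LongAdder if several threads write.
        public void record() {
            hits++;
        }

        public long hits() {
            return hits;
        }
    }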

    Read the article

  • Novell: 20 chances to reinvent itself

    The Open Road: "Novell, once the king of the software world, is like that. Over the years it has built up a broad portfolio of software (with associated revenue streams) in repeated attempts to regain its glory days. That portfolio now stifles its ability to focus on other areas with the most promise."

    Read the article

  • Students Can Discover JavaOne for Free

    - by Tori Wieldt
    Students can get a FREE Discover Pass for JavaOne to learn a bit about Java and network with experienced Java professionals. To be eligible, students must be:
    • At least 18 years old
    • Taking a minimum of 6 units
    • Enrolled in a nonprofit institution of learning
    Students will get all the benefits of a Discover attendee, which include the JavaOne and OpenWorld keynotes and the Exhibition Halls; space permitting, students can also attend JavaOne Technical and BOF (Birds-of-a-Feather) sessions, and HOLs (Hands-on Labs). Don't miss out on this opportunity for a real education with a FREE Discover Pass!

    Read the article

  • Excellent JAX-RS 2 Article on JavaLobby

    - by reza_rahman
    JAX-RS 2 is a key part of Java EE 7. It is currently in the early draft stage, and this is a great time to provide feedback. With this goal in mind, well-respected Java EE veteran Bill Burke of JBoss wrote an excellent article on DZone/JavaLobby overviewing what's in JAX-RS 2 so far. He discusses:
    • The client API
    • Asynchronous processing
    • Filters and entity interceptors
    The full article is posted here. Enjoy!
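
    For a flavour of the client API and async support being discussed, here is a minimal sketch using the API as it eventually shipped in JAX-RS 2.0 (details may differ from the early draft the article covers; the URL is made up):

    import java.util.concurrent.Future;
    import javax.ws.rs.client.Client;
    import javax.ws.rs.client.ClientBuilder;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    public class RestClientDemo {

        public static void main(String[] args) throws Exception {
            Client client = ClientBuilder.newClient();

            // Synchronous call: build the request fluently and read the entity as a String.
            String books = client.target("http://example.com/api/books")
                    .queryParam("author", "Burke")
                    .request(MediaType.APPLICATION_JSON)
                    .get(String.class);
            System.out.println(books);

            // Asynchronous call: the same request, returning a Future immediately.
            Future<Response> future = client.target("http://example.com/api/books")
                    .request(MediaType.APPLICATION_JSON)
                    .async()
                    .get();
            Response response = future.get();
            System.out.println("Async status: " + response.getStatus());
            response.close();

            client.close();
        }
    }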

    Read the article

  • Integrating Amazon S3 in Java via NetBeans IDE

    - by Geertjan
    To continue from yesterday, let's set up a scenario that enables us to make use of this drag/drop service in NetBeans IDE. The above service is applicable to Amazon S3, an Amazon storage provider that is typically used to store large binary files. In Amazon S3, every object stored is contained in a bucket. Buckets partition the namespace of objects stored in Amazon S3. More on buckets here. Let's use the tools in NetBeans IDE to create a Java application that accesses our Amazon S3 buckets. Create a Java application named "AmazonBuckets" with a main class named "AmazonBuckets". Open the main class and then drag the above service into the main method of the class. Now, NetBeans IDE will create all the other classes and the properties file that you see in the screenshot below. The first thing to do is to open the properties file above and enter the access key and secret:

    access_key=SOMETHING
    secret=SOMETHINGELSE

    Now you're all set up. Make sure to, of course, actually have some buckets available. Then rewrite the Java class to parse the XML that is returned via the generated code:

    package amazonbuckets;

    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.parsers.ParserConfigurationException;
    import org.netbeans.saas.amazon.AmazonS3Service;
    import org.netbeans.saas.RestResponse;
    import org.w3c.dom.DOMException;
    import org.w3c.dom.Document;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;
    import org.xml.sax.InputSource;
    import org.xml.sax.SAXException;

    public class AmazonBuckets {

        public static void main(String[] args) {
            try {
                RestResponse result = AmazonS3Service.getBuckets();
                String dataAsString = result.getDataAsString();
                DocumentBuilderFactory dbFactory = DocumentBuilderFactory.newInstance();
                DocumentBuilder dBuilder = dbFactory.newDocumentBuilder();
                Document doc = dBuilder.parse(
                        new InputSource(new ByteArrayInputStream(dataAsString.getBytes("utf-8"))));
                NodeList bucketList = doc.getElementsByTagName("Bucket");
                for (int i = 0; i < bucketList.getLength(); i++) {
                    Node node = bucketList.item(i);
                    System.out.println("Bucket Name: " + node.getFirstChild().getTextContent());
                }
            } catch (IOException | ParserConfigurationException | SAXException | DOMException ex) {
                // Errors are silently ignored in this demo.
            }
        }
    }

    That's all. This is simpler to set up than the scenario described yesterday. Also notice that there are other Amazon S3 services you can interact with from your Java code, again after generating a heap of code after drag/drop into a Java source file. I tried the above, e.g., I created a new Amazon S3 bucket after dragging "createBucket", adding my credentials in the properties file, and then running the code that had been created. I.e., without adding a single line of code I was able to programmatically create new buckets. The above outlines a handy set of tools and techniques to use if you want to let your users store and access data in Amazon S3 buckets directly from the application you've created for them.

    Read the article

  • Goodbye, Estonian Kroon. Hello, euro!

    - by Theresa Hickman
    Happy New Year! As of the stroke of midnight on 1/1/11, Estonia became a member of the Euro zone. Keeping consistent with the #1 theme, they're the first former Soviet bloc country to do so. With a population of only 1.34M (wouldn't it have been ironic if their population was 1.11M?), Estonia is one of the least-populated countries in Europe to join the European Union. Its currency, the Estonian Kroon was converted at its fixed rate of 15.6466 Kroons to the Euro. Some of its neighbors, such as Lithuania and Latvia, also hope to join the Euro zone soon, but who knows now that there is so much turmoil.

    Read the article

  • Adding Exchange Multi Tenant to an existing Exchange Network

    - by TiernanO
    I currently have an Exchange 2010 SP1 server in house, and due to some changes, it looks like I will need multi-tenant support for a few extra domain names. The documentation I have found so far only mentions multi-tenant support when upgrading from 2010 RTM to SP1, not what you do if you already have 2010 SP1 installed. So, from what I can gather, I have a few options:
    • Install a new Exchange server with multi-tenant support and migrate the DBs over
    • Back everything up and start again
    • Something else...
    Any suggestions would be greatly appreciated... Thanks.

    Read the article

  • Procurement Search Helpers

    - by Oracle_EBS
    To access all our Procurement Search Helpers see Doc ID 1391332.2, our Procurement Information Center Index, then click on Purchasing under Procurement Suite. Here you will see links to our Procurement Search Helpers. Search Helpers provide a collection of solutions based on the symptoms you enter. Try these before logging a Service Request. If you are not sure how to use Search Helpers, click on 'About this Note' in each document. Current Procurement Search Helpers (Doc ID - Search Helper Title):
    • 1361856.1 - EBS: Purchase Order and Requisition Approval Search Helper (In Process or Incomplete Status)
    • 1377764.1 - EBS: PO Output for Communication / Supplier Notification Issues Search Helper
    • 1364360.1 - EBS: Requisition To Purchase Order Search Helper
    • 1369663.1 - EBS: Purchase Document Open Interface and API Search Helper
    • 1391970.1 - EBS: Search Helper for RVTII-060 Errors in Receiving
    • 1394392.1 - EBS: Purchasing Buyer Work Center Search Helper
    • 1470034.1 - EBS: Document Control Issues Search Helper

    Read the article

  • Integrating Amazon EC2 in Java via NetBeans IDE

    - by Geertjan
    Next, having looked at Amazon Associates services and Amazon S3, let's take a look at Amazon EC2, the elastic compute cloud which provides remote computing services. I started by launching an instance of Ubuntu Server 14.04 on Amazon EC2, which looks a bit like this in the on-line AWS Management Console, though I whitened out most of the details: Now that I have at least one running instance available on Amazon EC2, it makes sense to use the services that are integrated into NetBeans IDE. I created a new application with one class, named "AmazonEC2Demo". Then I dragged the "describeInstances" service that you see above, with the mouse, into the class. Then the IDE automatically created all the other files you see below, i.e., 4 Java classes and one properties file. In the properties file, register the access ID and secret keys. These are read by the other generated Java classes. Signing and authentication are done automatically by the code that is generated, i.e., there's nothing generic you need to do and you can immediately begin working on your domain-specific code. Finally, you're now able to rewrite the code in "AmazonEC2Demo" to connect to Amazon EC2 and obtain information about your running instance:

    public class AmazonEC2Demo {

        public static void main(String[] args) {
            String instanceId1 = "i-something";
            RestResponse result;
            try {
                result = AmazonEC2Service.describeInstances(instanceId1);
                System.out.println(result.getDataAsString());
            } catch (IOException ex) {
                Logger.getLogger(AmazonEC2Demo.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
    }

    From the above, you'll receive a chunk of XML with data about the running instance, its name, status, dates, etc. In other words, you're now ready to integrate Amazon EC2 features directly into the applications you're writing, without very much work to get started. Within about 5 minutes, you're working on your business logic, rather than on the generic code that anyone needs when integrating with Amazon EC2.

    Read the article

  • deny-uncovered-http-methods in Servlet 3.1

    - by reza_rahman
    Servlet 3.1 is a relatively minor release included in Java EE 7. However, the Java EE foundational API still contains some very important changes. One such set of features are the security enhancements done in Servlet 3.1 such as the new deny-uncovered-http-methods option. Servlet 3.1 co-spec lead Shing Wai Chan outlines the use case for the feature and shows you how to use it in a recent code example driven post. You can also check out the official specification yourself or try things out with the newly released Java EE 7 SDK.
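
    As a rough illustration of the option (a hypothetical web.xml fragment for a Servlet 3.1 descriptor; the URL pattern and role name are made up):

    <security-constraint>
        <web-resource-collection>
            <web-resource-name>admin area</web-resource-name>
            <url-pattern>/admin/*</url-pattern>
            <!-- Only GET and POST are covered by this constraint. -->
            <http-method>GET</http-method>
            <http-method>POST</http-method>
        </web-resource-collection>
        <auth-constraint>
            <role-name>admin</role-name>
        </auth-constraint>
    </security-constraint>

    <!-- Without this element, uncovered methods such as PUT or DELETE on /admin/*
         would bypass the constraint; with it, they are denied outright. -->
    <deny-uncovered-http-methods/>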

    Read the article

  • Win Server 2003 - Task Scheduler - Tasks with GUI and Services

    - by august_month
    I need to run an Excel macro daily. I scheduled it with the Windows Task Scheduler and it worked fine until I had to change my password. I wonder if it's possible to have a task scheduled without a password? As an alternative we have third-party scheduling software, but this software cannot launch Excel. Tech support said that since Excel has a GUI, and the scheduling software runs as a service with "Allow service to interact with desktop" disabled, it cannot launch Excel. Tech support also mentioned that "Allow service to interact with desktop" is not supported as of Vista. I totally trust the tech support guy, I just need a workaround that would make my network administrator and me happy. Regards.

    Read the article
