Search Results

Search found 41848 results on 1674 pages for 'type signature'.


  • Custom Configuration Section Handlers

    Most .NET developers who need to store something in configuration tend to use appSettings for this purpose, in my experience. More recently, the framework itself has helped things by adding the <connectionStrings /> section, so at least these are in their own section and not adding to the appSettings clutter that pollutes most apps. I recommend avoiding appSettings for several reasons. In addition to those listed there, I would add that strong typing and validation are additional reasons to go the custom configuration section route.

    For my ASP.NET Tips and Tricks talk, I use the following example, which is a simple DemoSettings class that includes two fields. The first is an integer representing how many attendees are present for the talk, and the second is the title of the talk. The setup in web.config is as follows:

    <configSections>
      <section name="DemoSettings" type="ASPNETTipsAndTricks.Code.DemoSettings" />
    </configSections>

    <DemoSettings sessionAttendees="100" title="ASP.NET Tips and Tricks DevConnections Spring 2010" />

    Referencing the values in code is strongly typed and straightforward. Here I have a page that exposes two properties which internally get their values from the configuration section handler:

    public partial class CustomConfig1 : System.Web.UI.Page
    {
        public string SessionTitle
        {
            get { return DemoSettings.Settings.Title; }
        }

        public int SessionAttendees
        {
            get { return DemoSettings.Settings.SessionAttendees; }
        }
    }

    Note that the settings are only read from the config file once; after that they are cached, so there is no need to be concerned about excessive file access. Now we've seen how to set it up in the config file and how to refer to the settings in code. All that remains is to see the class itself:

    public class DemoSettings : ConfigurationSection
    {
        private static DemoSettings settings =
            ConfigurationManager.GetSection("DemoSettings") as DemoSettings;

        public static DemoSettings Settings
        {
            get { return settings; }
        }

        [ConfigurationProperty("sessionAttendees", DefaultValue = 200, IsRequired = false)]
        [IntegerValidator(MinValue = 1, MaxValue = 10000)]
        public int SessionAttendees
        {
            get { return (int)this["sessionAttendees"]; }
            set { this["sessionAttendees"] = value; }
        }

        [ConfigurationProperty("title", IsRequired = true)]
        [StringValidator(InvalidCharacters = "~!@#$%^&*()[]{}/;\"|\\")]
        public string Title
        {
            get { return (string)this["title"]; }
            set { this["title"] = value; }
        }
    }

    The class is pretty straightforward, but there are some important components to note. First, it must inherit from System.Configuration.ConfigurationSection. Next, as a convention I like to have a static settings member that is responsible for pulling out the section when the class is first referenced, and further to expose this collection via a static readonly property, Settings. Note that the types of both of these are the type of my class, DemoSettings. The properties of the class, SessionAttendees and Title, should map to the attributes of the config element in the XML file. The [ConfigurationProperty] attribute allows you to map the attribute name to the property name (thus using both XML standard naming conventions and C# naming conventions). In addition, you can specify a default value to use if nothing is specified in the config file, and whether or not the setting must be provided (IsRequired). If it is required, then it doesn't make sense to include a default value.
    Beyond defaults and required, you can specify more advanced validation rules for the configuration values using additional C# attributes, such as [IntegerValidator] and [StringValidator]. Using these, you can declaratively specify that your configuration values must fall within a given range, or omit certain forbidden characters, for instance. Of course you can write your own custom validation attributes, and there are others provided in System.Configuration.

    Individual sections can also be loaded from separate files, using syntax like this:

    <DemoSettings configSource="demosettings.config" />

    Summary

    Using a custom configuration section handler is not hard. If your application or component requires configuration, I recommend creating a custom configuration handler dedicated to your app or component. Doing so will reduce the clutter in appSettings, will provide you with strong typing and validation, and will make it much easier for other developers or system administrators to locate and understand the various configuration values that are necessary for a given application.
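    To illustrate the point about custom validation attributes, here is a minimal, hedged sketch of how one could be wired up. The EvenNumberValidator name and its even-number rule are invented purely for illustration; only the base classes come from System.Configuration:

    using System;
    using System.Configuration;

    // A hypothetical validator that only accepts even integers.
    public class EvenNumberValidator : ConfigurationValidatorBase
    {
        public override bool CanValidate(Type type)
        {
            return type == typeof(int);
        }

        public override void Validate(object value)
        {
            if ((int)value % 2 != 0)
                throw new ArgumentException("Value must be an even number.");
        }
    }

    // The attribute that exposes the validator for declarative use on a configuration property.
    [AttributeUsage(AttributeTargets.Property)]
    public sealed class EvenNumberValidatorAttribute : ConfigurationValidatorAttribute
    {
        public override ConfigurationValidatorBase ValidatorInstance
        {
            get { return new EvenNumberValidator(); }
        }
    }

    It would then be applied to a property such as SessionAttendees in exactly the same declarative way as [IntegerValidator] above.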

    Read the article

  • Ubuntu Unable to format 16Gb Usb Disk

    - by akshay.is.gr8
    Whenever I try to format my 16 GB USB disk using GParted, it appears to do the formatting, but when it refreshes it shows "unknown". I tried Disk Utility as well; Disk Utility was able to format it as FAT, but the files vanish when the disk is removed and attached again. Edit: the format completes every time, but when using GParted it immediately shows an "unknown" file system type, and Disk Utility shows FAT; when the disk is unplugged and then connected again, the files are not there. Either way it is unusable.

    Read the article

  • Export Foxbase database tables to csv

    - by RKS
    I have no experience with FoxBase whatsoever and I'm used to working with MySQL via phpMyAdmin or interfaces like that. My company has a third-party database we're trying to move away from, but we have no support from the company. The database is on our servers, but in a FoxBase format. What kind of tools do I need to convert these into other formats, or is there any type of admin UI I can tell them to use to export to CSV or anything like that? Basically I'm asking how to export the FoxBase tables to CSV. Sorry if this question isn't clear or you need more information. I will edit with anything else you need.
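    One possible route, sketched below under assumptions that can't be verified for this setup: FoxBase tables are normally plain .dbf files, and if the Visual FoxPro OLE DB provider (VFPOLEDB) is installed, a few lines of C# can dump a table to CSV. The folder path, table name, and output file here are placeholders, and the CSV writing is deliberately minimal (no quoting or escaping):

    using System;
    using System.Data.OleDb;
    using System.IO;
    using System.Linq;

    class DbfToCsv
    {
        static void Main()
        {
            // Point the provider at the folder that holds the .dbf files (placeholder path).
            var connectionString = @"Provider=VFPOLEDB.1;Data Source=C:\data\foxbase\;";

            using (var connection = new OleDbConnection(connectionString))
            using (var command = new OleDbCommand("SELECT * FROM customers", connection)) // placeholder table name
            using (var writer = new StreamWriter(@"C:\data\customers.csv"))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    // Header row built from the column names.
                    var columns = Enumerable.Range(0, reader.FieldCount).Select(reader.GetName);
                    writer.WriteLine(string.Join(",", columns));

                    // One CSV line per record.
                    while (reader.Read())
                    {
                        var values = Enumerable.Range(0, reader.FieldCount)
                                               .Select(i => Convert.ToString(reader.GetValue(i)));
                        writer.WriteLine(string.Join(",", values));
                    }
                }
            }
        }
    }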

    Read the article

  • Application Composer: Exposing Your Customizations in BI Analytics and Reporting

    - by Richard Bingham
    Introduction

    This article explains in simple terms how to ensure the customizations and extensions you have made to your Fusion Applications are available for use in reporting and analytics. It also includes four embedded demo videos from our YouTube channel (if they don't appear, check the browser address bar for a blocking shield icon). If you are new to Business Intelligence, consider first reviewing our getting started article, and you can read more about the topic of custom subject areas in the documentation book Extending Sales. There are essentially four sections to this post. First we look at how custom fields added to standard objects are made available for reporting. Secondly we look at creating custom subject areas on the standard objects. Next we consider reporting on custom objects, starting with simple standalone objects, then child custom objects, and finally custom objects with relationships. Finally this article reviews how flexfields are exposed for reporting. Whilst this article applies to both Cloud/SaaS and on-premises deployments, if you are an on-premises developer then you can also use the BI Administration Tool to customize your BI metadata repository (the RPD) and create new subject areas. Whilst this is not covered here, you can read more in Chapter 8 of the Extensibility Guide for Developers.

    Custom Fields on Standard Objects

    If you add a custom field to your standard object then it's likely you'll want to include it in your reports. This is very simple, since all new fields are instantly available in the "[objectName] Extension" folder in existing subject areas. The following two-minute video demonstrates this.

    Custom Subject Areas for Standard Objects

    You can create your own subject areas for use in analytics and reporting via Application Composer. An example use-case could be to simplify the seeded subject areas, since they sometimes contain complex data fields and internal values that could confuse business users. One thing to note is that you cannot create subject areas in a sandbox, as it is not supported by BI, so once your custom object is tested and complete you'll need to publish the sandbox before moving forwards. The subject area creation process is essentially two-fold. Once the request is submitted, the ADF artifacts are generated; then secondly the related metadata is sent to the BI presentation server APIs to make the updates there. One thing to note is that this second step may take up to ten minutes to complete. Once finished, the status of the custom subject area request should show as 'OK' and it is then ready for use. Within the creation process's wizard-like steps there are three concepts worth highlighting:

    - Date Flattening - this feature permits the roll-up of reports at various date levels, such as data by week, month, quarter, or year. You simply check the box to enable it for that date field.
    - Measures - these are your own functions that you can build into the custom subject area. They are related to the field data type and include min-max for dates, and sum(), avg(), and count() for numeric fields.
    - Implicit Facts - used to make the BI metadata join between your object fields and the calculated measure fields. The advice is to choose the most frequently used measure to ensure consistency.

    This video shows a simple example, where a simplified subject area is created for the customer 'Contact' standard object, picking just a few fields upon which users can then create reports.
    Custom Objects

    Custom subject areas support three types of custom objects. The first is a simple standalone custom object, for which the same process mentioned above applies. The next is a custom child object created on a standard object parent, and finally a custom object that is related to a parent object, usually through a dynamic choice list. Whilst the steps in each of these last two are mostly the same, there are differences in the way you choose the objects and their fields. This is illustrated in the videos below. The first video shows the process for creating a custom subject area for a simple standalone custom object. The second video demonstrates how to create custom subject areas for custom objects that are of parent:child type, as well as those with dynamic-choice-list relationships.

    Flexfields

    Dynamic and Extensible Flexfields satisfy a similar requirement as custom fields (for Application Composer), with flexfields common across the Fusion Financials, Supply Chain and Procurement, and HCM applications. The basic principle is that when you enable and configure your flexfields, in the edit page under each segment region (for both global and context segments) there is a BI Enabled check box. Once this is checked and you've completed your configuration, you run the Scheduled Process job named 'Import Oracle Fusion Data Extensions for Transactional Business Intelligence' to generate and migrate the related BI artifacts and data. This applies for dynamic, key, and extensible flexfields. Of course there is more to consider in terms of how you wish your flexfields to be implemented and exposed in your reports, and details are given in Chapter 4 of the Extending Applications guide.

    Read the article

  • How can I associate .doc files to MS Word 2010 using the same .desktop file as launcher?

    - by nastys
    I'm trying to associate .doc and .docx files to MS Word 2010 using the same .desktop file as the Unity dash and launcher, so I can use the Word icon in the launcher. I tried:

    [Desktop Entry]
    Name=Microsoft Word 2010
    Exec=env WINEPREFIX="/home/nastys/.mso2010" wine "C:/Program Files/Microsoft Office/Office14/WINWORD.exe" %f
    Type=Application
    StartupNotify=true
    Comment=Create and edit professional-looking documents such as letters, papers, reports, and booklets by using Microsoft Word.
    Icon=29F5_WINWORD.0
    StartupWMClass=WINWORD.EXE
    MimeType=application/msword; application/vnd.openxmlformats-officedocument.wordprocessingml.document;

    Using this .desktop file I can launch Word with its icon in the Unity launcher, but if I associate .doc files to the same file, Word will launch but it won't open the .doc file. If I associate .doc files to any .desktop file generated by Wine, it will launch Word, but it will use the Wine icon.

    Read the article

  • Unable to mount hard disk

    - by user101522
    I am unable to mount my hard disk and got this message:

    Unable to mount 158 GB Filesystem
    Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sda1,
    missing codepage or helper program, or other error
    In some cases useful info is found in syslog - try dmesg | tail or so

    From the terminal, I tried syslog:

    No command 'syslog' found, did you mean:
    Command 'dsyslog' from package 'dsyslog' (universe)
    Command 'syslogd' from package 'sysklogd' (universe)
    Command 'syslogd' from package 'inetutils-syslogd' (universe)
    Command 'syslogd' from package 'busybox-syslogd' (universe)
    syslog: command not found

    Also tried dmesg | tail:

    [ 971.390588] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 12 62 30 80 00 00 40 00
    [ 971.390600] end_request: I/O error, dev sda, sector 308424832
    [ 971.390605] Read-error on swap-device (8:0:308424840)
    [ 971.390608] Read-error on swap-device (8:0:308424848)
    [ 971.390617] Read-error on swap-device (8:0:308424856)
    [ 971.390620] Read-error on swap-device (8:0:308424864)
    [ 971.390623] Read-error on swap-device (8:0:308424872)
    [ 971.390626] Read-error on swap-device (8:0:308424880)
    [ 971.390629] Read-error on swap-device (8:0:308424888)
    [ 971.390632] Read-error on swap-device (8:0:308424896)

    It was fine before I tried to re-install 12.04 from the live CD (which failed due to the disk problem).

    Read the article

  • Getting My Head Around Immutability

    - by Michael Mangold
    I'm new to object-oriented programming, and one concept that has been taking me a while to grasp is immutability. I think the light bulb went off last night but I want to verify: When I come across statements that an immutable object cannot be changed, I'm puzzled because I can, for instance, do the following:

    NSString *myName = @"Bob";
    myName = @"Mike";

    There, I just changed myName, of immutable type NSString. My problem is that the word "object" can refer to the physical object in memory, or the abstraction, "myName." The former definition applies to the concept of immutability. As for the variable, a clearer (to me) definition of immutability is that the value of an immutable object can only be changed by also changing its location in memory, i.e. its reference (also known as its pointer). Is this correct, or am I still lost in the woods?
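    As an aside, the same distinction can be sketched in C# (an illustrative analogue only, not part of the original question): reassigning a variable of an immutable type replaces the reference, whereas a mutable type changes in place.

    using System;
    using System.Text;

    class ImmutabilityDemo
    {
        static void Main()
        {
            // Reassignment: the variable now refers to a different string object.
            // The original "Bob" object is never modified - string is immutable.
            string myName = "Bob";
            myName = "Mike";

            // Mutation: StringBuilder is mutable, so the same object changes in place.
            var builder = new StringBuilder("Bob");
            builder.Clear();
            builder.Append("Mike");

            Console.WriteLine(myName);   // Mike
            Console.WriteLine(builder);  // Mike
        }
    }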

    Read the article

  • Out of memory on MATLAB

    - by Eric Sánchez
    I'm trying to run a script on MATLAB 2011a which calculates some means for a climatology of 50 years. When I started to run the script for all the years it worked fine until the 20th iteration, and then this message appeared: Out of memory. Type HELP MEMORY for your options. Then I used clear v1 v2 v3 ... to clear all the variables inside the function; I also used clear train because I saw it in another forum. With or without these modifications, I run the script again (from the 21st iteration) and the result is the same message, but curiously it sometimes runs one more year and then stops. Any ideas about solving this problem? What do I have to clear to make it run correctly? (In this MATLAB version there is no memory command, which maybe could have helped me.)

    Read the article

  • Microsoft and Joyent Announce Node.js Windows Port

    With the Node.js command line tool, developers can type 'node my_app.js' to run JavaScript programs. It also gives developers a JavaScript application programming interface (API) that supplies access to the network and file system. One instance where Node.js often comes in handy is in the creation of scalable networked programs that emphasize high concurrency and low response times. Developers who wish to use Node.js on Windows at this time must do so by running a virtual machine with Linux. Claudia Caldato, Principal Program Manager of Microsoft's Interoperability Strategy Team, offered...

    Read the article

  • Point[] and Tri "could not be found"

    - by Craig Dannehl
    Hi, I'm trying to learn how to load a .obj file using OpenTK in Windows Forms. I have seen a lot of examples out there, and I see almost everyone uses List and Point[]. Code examples show these highlighted as if their IDE knows what they are; for example:

    List<Tri> tris = new List<Tri>();

    but mine just returns "The type or namespace name 'Tri' could not be found". Is there an include I need to add or a using I am missing? Currently I have this:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.IO;
    using System.Drawing;
    using OpenTK;
    using OpenTK.Graphics;
    using OpenTK.Graphics.OpenGL;
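    For what it's worth, Tri does not appear to be an OpenTK or .NET framework type; the obj-loader samples that use it generally declare it themselves. A minimal sketch of the kind of type such samples define (the field names here are a guess for illustration, not taken from any particular sample):

    // A hypothetical triangle record: three indices into a vertex list.
    public struct Tri
    {
        public int Index0;
        public int Index1;
        public int Index2;

        public Tri(int index0, int index1, int index2)
        {
            Index0 = index0;
            Index1 = index1;
            Index2 = index2;
        }
    }

    With a definition like this somewhere in the project, List<Tri> compiles; there is no separate using directive that pulls it in.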

    Read the article

  • Microsoft is working on Office Talk, a Twitter for businesses: the project is still in the Office Labs

    Microsoft is studying a Twitter for businesses. The project, named Office Talk, is still in the Office Labs. The project is still only in its very early stages; Microsoft in fact still speaks of "research" on its Office Labs site. But it seems that Redmond is looking further ahead and really intends to launch a micro-blogging tool (Twitter-style) aimed at businesses and professional users. Named OfficeTalk, "it would allow employees to post their thoughts, their activities, and information potentially useful to anyone who might be interested in it".

    Read the article

  • A researcher uses Amazon's cloud to hack protected networks, cracking WPA-PSK encryption by brute force

    A researcher uses Amazon's cloud to hack protected networks, cracking WPA-PSK encryption by brute force. A computer security researcher has just announced that he has identified a simple, fast, and cheap way to exploit a flaw in Amazon Web Services. The researcher is Thomas Roth, a German consultant, who claims he can break into protected networks. How? Using a specific program that he wrote and that runs on Amazon's cloud-based computers. It launches brute-force attacks and tests close to 400,000 different passwords per second using Amazon's machines. The technique goes after one specific and very...

    Read the article

  • Access secondary hard disk from Virtual Machine

    - by Frank V
    I have a fairly specific question. I had Ubuntu on my Laptop (for years). For a variety of reasons, I've had to switch to Windows but the computer has two hard drives. The main drive was reformatted and I've installed windows. The second hard drive still has the Linux system disk format (not sure on type). Obviously, windows can't access it but can I access it from a Virtual machine (VirtualBox) or will I need to load up a Live-Session to access / move the contents? Edit: If this is possible, how would one proceed to mount the disk?

    Read the article

  • How can I share my python scripts with my less python-savvy business person partner?

    - by Alex
    I'm taking financial mathematics as an elective, and I'm working with real life finance industry worker type people. It's actually kind of fun. When I pulled out a macbook at one of our meetings, I had four lifelong windows users look at me like I had three heads. Anyway, I'm helping with design and simulation of our trading strategy, and I wrote a little thing using matplotlib to visualize historical stock data. However, these guys don't know how to use git, or install python, or deal with path-related package management things. I need to be able to send my scripts to them to use, and I need to do it with absolutely minimal effort on their part. I was thinking something on the lines of py2exe, but I'd like to hear some advice before I go ahead.

    Read the article

  • Building an OpenStack Cloud for Solaris Engineering, Part 1

    - by Dave Miner
    One of the signature features of the recently-released Solaris 11.2 is the OpenStack cloud computing platform. Over on the Solaris OpenStack blog the development team is publishing lots of details about our version of OpenStack Havana as well as some tips on specific features, and I highly recommend reading those to get a feel for how we've leveraged Solaris's features to build a top-notch cloud platform. In this and some subsequent posts I'm going to look at it from a different perspective, which is that of the enterprise administrator deploying an OpenStack cloud. But this won't be just a theoretical perspective: I've spent the past several months putting together a deployment of OpenStack for use by the Solaris engineering organization, and now that it's in production we'll share how we built it and what we've learned so far.

    In the Solaris engineering organization we've long had dedicated lab systems dispersed among our various sites and a home-grown reservation tool for developers to reserve those systems; various teams also have private systems for specific testing purposes. But as a developer, it can still be difficult to find systems you need, especially since most Solaris changes require testing on both SPARC and x86 systems before they can be integrated. We've added virtual resources over the years as well in the form of LDOMs and zones (both traditional non-global zones and the new kernel zones). Fundamentally, though, these were all still deployed in the same model: our overworked lab administrators set up pre-configured resources and we then reserve them. Sounds like pretty much every traditional IT shop, right? Which means that there's a lot of opportunity for efficiencies from greater use of virtualization and the self-service style of cloud computing. As we were well into development of OpenStack on Solaris, I was recruited to figure out how we could deploy it to both provide more (and more efficient) development and test resources for the organization as well as a test environment for Solaris OpenStack.

    At this point, let's acknowledge one fact: deploying OpenStack is hard. It's a very complex piece of software that makes use of sophisticated networking features and runs as a ton of service daemons with myriad configuration files. The web UI, Horizon, doesn't often do a good job of providing detailed errors. Even the command-line clients are not as transparent as you'd like, though at least you can turn on verbose and debug messaging and often get some clues as to what to look for, though it helps if you're good at reading JSON structure dumps. I'd already learned all of this in doing a single-system Grizzly-on-Linux deployment for the development team to reference when they were getting started, so I at least came to this job with some appreciation for what I was taking on. The good news is that both we and the community have done a lot to make deployment much easier in the last year; probably the easiest approach is to download the OpenStack Unified Archive from OTN to get your hands on a single-system demonstration environment. I highly recommend getting started with something like it to get some understanding of OpenStack before you embark on a more complex deployment. For some situations, it may in fact be all you ever need. If so, you don't need to read the rest of this series of posts!

    In the Solaris engineering case, we need a lot more horsepower than a single-system cloud can provide.
    We need to support both SPARC and x86 VM's, and we have hundreds of developers so we want to be able to scale to support thousands of VM's, though we're going to build to that scale over time, not immediately. We also want to be able to test both Solaris 11 updates and a release such as Solaris 12 that's under development so that we can work out any upgrade issues before release. One thing we don't have is a requirement for extremely high availability, at least at this point. We surely don't want a lot of down time, but we can tolerate scheduled outages and brief (as in an hour or so) unscheduled ones. Thus I didn't need to spend effort on trying to get high availability everywhere.

    The diagram below shows our initial deployment design. We're using six systems, most of which are x86 because we had more of those immediately available. All of those systems reside on a management VLAN and are connected with a two-way link aggregation of 1 Gb links (we don't yet have 10 Gb switching infrastructure in place, but we'll get there). A separate VLAN provides "public" (as in connected to the rest of Oracle's internal network) addresses, while we use VxLANs for the tenant networks. One system is more or less the control node, providing the MySQL database, RabbitMQ, Keystone, and the Nova API and scheduler as well as the Horizon console. We're curious how this will perform and I anticipate eventually splitting at least the database off to another node to help simplify upgrades, but at our present scale this works.

    I had a couple of systems with lots of disk space, one of which was already configured as the Automated Installation server for the lab, so it's just providing the Glance image repository for OpenStack. The other node with lots of disks provides the Cinder block storage service; we also have a ZFS Storage Appliance that will help back-end Cinder in the near future, I just haven't had time to get it configured in yet.

    There's a separate system for Neutron, which is our Elastic Virtual Switch controller and handles the routing and NAT for the guests. We don't have any need for firewalling in this deployment so we're not doing so. We presently have only two tenants defined, one for the Solaris organization that's funding this cloud, and a separate tenant for other Oracle organizations that would like to try out OpenStack on Solaris. Each tenant has one VxLAN defined initially, but we can of course add more. Right now we have just a single /24 network for the floating IP's; once we get demand up to where we need more then we'll add them.

    Finally, we have started with just two compute nodes; one is an x86 system, the other is an LDOM on a SPARC T5-2. We'll be adding more when demand reaches the level where we need them, but as we're still ramping up the user base it's less work to manage fewer nodes until then.

    My next post will delve into the details of building this OpenStack cloud's infrastructure, including how we're using various Solaris features such as Automated Installation, IPS packaging, SMF, and Puppet to deploy and manage the nodes. After that we'll get into the specifics of configuring and running OpenStack itself.

    Read the article

  • Database Insider May Edition - Now Available

    - by jenny.gelhausen
    The May Edition of the Database Insider newsletter is now available. This edition covers customer successes with Oracle Database, upcoming events not to be missed, as well as headlining news articles:

    - Oracle Application Express 4.0 Will Rock Kaleidoscope 2010
    - Fast-track to Oracle Database 11g with Oracle Consulting
    - Save 10% on Oracle Database Management Packs

    Check it out here.

    Read the article

  • Why doesn't my wifi hotspot work correctly?

    - by Habi
    I have created a wifi hotspot. In Windows it used to show a wifi signal on my Nokia C3, but after creating the wifi hotspot from Ubuntu I am facing a problem. The Nokia C3 doesn't even get a wifi signal. The wifi signal does appear on my Samsung GTS3-850, and I can access wifi if I set the wifi security option to "none". On the other hand, if I set a password, then no matter whether I type the right password or not, it doesn't connect to that wifi. Note: I use WiMAX to access the internet.

    Read the article

  • Using Substring() in XML FLWOR Queries

    - by Jonathan Kehayias
    Tonight I was monitoring the #sqlhelp hashtag on Twitter for a response to a question I asked when Randy Knight ( Twitter ) asked a question about using SUBSTRING in FLWOR statements with XML. #sqlhelp Is there a way to do a SQL Type "LIKE" or "SUBSTRING" in the where clause of FLWOR statement? Need to evaluate just first n chars. By the time I posted a response, Randy had figured out how to use the contains() function to solve his problem, but I am going to blog this because...(read more)

    Read the article

  • dead keys not working in java app

    - by jippie
    The Arduino IDE is based on a Java application called Processing. While typing text into Processing, it refuses to accept any of the characters under dead keys like " ' ^. As a workaround I: open another window (usually my browser or mail client); type the character I need; select and copy the character onto my clipboard; copy the character into Processing. What do I have to do to make Processing accept characters under dead keys? Disabling dead keys is not an option.

    Read the article

  • Node Serialization in NetBeans Platform 7.0

    - by Geertjan
    Node serialization makes sense when you're not interested in the data (since that should be serialized to a database), but in the state of the application. For example, when the application restarts, you want the last selected node to automatically be selected again. That's not the kind of information you'll want to store in a database, hence node serialization is not about data serialization but about application state serialization. I've written about this topic in October 2008, here and here, but want to show how to do this again, using NetBeans Platform 7.0. Somewhere I remember reading that this can't be done anymore and that's typically the best motivation for me, i.e., to prove that it can be done after all. Anyway, in a standard POJO/Node/BeanTreeView scenario, do the following:

    Remove the "@ConvertAsProperties" annotation at the top of the class, which you'll find there if you used the Window Component wizard. We're not going to use property-file based serialization, but plain old java.io.Serializable instead.

    In the TopComponent, assuming it is named "UserExplorerTopComponent", typically at the end of the file, add the following:

    @Override
    public Object writeReplace() {
        //We want to work with one selected item only
        //and thanks to BeanTreeView.setSelectionMode,
        //only one node can be selected anyway:
        Handle handle = NodeOp.toHandles(em.getSelectedNodes())[0];
        return new ResolvableHelper(handle);
    }

    public final static class ResolvableHelper implements Serializable {

        private static final long serialVersionUID = 1L;
        public Handle selectedHandle;

        private ResolvableHelper(Handle selectedHandle) {
            this.selectedHandle = selectedHandle;
        }

        public Object readResolve() {
            WindowManager.getDefault().invokeWhenUIReady(new Runnable() {
                @Override
                public void run() {
                    try {
                        //Get the TopComponent:
                        UserExplorerTopComponent tc = (UserExplorerTopComponent)
                                WindowManager.getDefault().findTopComponent("UserExplorerTopComponent");
                        //Get the display text to search for:
                        String selectedDisplayName = selectedHandle.getNode().getDisplayName();
                        //Get the root, which is the parent of the node we want:
                        Node root = tc.getExplorerManager().getRootContext();
                        //Find the node, by passing in the root with the display text:
                        Node selectedNode = NodeOp.findPath(root, new String[]{selectedDisplayName});
                        //Set the explorer manager's selected node:
                        tc.getExplorerManager().setSelectedNodes(new Node[]{selectedNode});
                    } catch (PropertyVetoException ex) {
                        Exceptions.printStackTrace(ex);
                    } catch (IOException ex) {
                        Exceptions.printStackTrace(ex);
                    }
                }
            });
            return null;
        }
    }

    Assuming you have a node named "UserNode" for a type named "User" containing a property named "type", add the bits in bold below to your "UserNode":

    public class UserNode extends AbstractNode implements Serializable {

        static final long serialVersionUID = 1L;

        public UserNode(User key) {
            super(Children.LEAF);
            setName(key.getType());
        }

        @Override
        public Handle getHandle() {
            return new CustomHandle(this, getName());
        }

        public class CustomHandle implements Node.Handle {

            static final long serialVersionUID = 1L;
            private AbstractNode node = null;
            private final String searchString;

            public CustomHandle(AbstractNode node, String searchString) {
                this.node = node;
                this.searchString = searchString;
            }

            @Override
            public Node getNode() {
                node.setName(searchString);
                return node;
            }
        }
    }

    Run the application and select one of the user nodes. Close the application. Start it up again.
    The user node is not automatically selected; in fact, the window does not open, and you will see this in the output:

    Caused: java.io.InvalidClassException: org.serialization.sample.UserNode; no valid constructor

    Read this article and then you'll understand the need for this class:

    public class BaseNode extends AbstractNode {

        public BaseNode() {
            super(Children.LEAF);
        }

        public BaseNode(Children kids) {
            super(kids);
        }

        public BaseNode(Children kids, Lookup lkp) {
            super(kids, lkp);
        }
    }

    Now, instead of extending AbstractNode in your UserNode, extend BaseNode. Then the first non-serializable superclass of the UserNode has an explicitly declared no-args constructor. Do the same as the above for each node in the hierarchy that needs to be serialized. If you have multiple nodes needing serialization, you can share the "CustomHandle" inner class above between all the other nodes, while all the other nodes will also need to extend BaseNode (or provide their own non-serializable superclass that explicitly declares a no-args constructor). Now, when I run the application, I select a node, then I close the application, restart it, and the previously selected node is automatically selected when the application has restarted.

    Read the article

  • Find the latest file by modified date

    - by Rich
    If I want to find the latest file (mtime) in a (big) directory containing subdirectories, how would I do it? Lots of posts I've found suggest some variation of ls -lt | head (amusingly, many suggest ls -ltr | tail which is the same but less efficient), which is fine unless you have subdirectories (I do). Then again, you could find . -type f -exec ls -lt \{\} \+ | head which will definitely do the trick for as many files as can be specified by one command, i.e. if you have a big directory, -exec...\+ will issue separate commands; therefore each group will be sorted by ls within itself but not over the total set; the head will therefore pick up the latest entry of the first batch. Any answers?

    Read the article

  • .mdf Database Filetype

    - by James Izzard
    Would somebody be kind enough to correct my understanding of the following (if incorrect)? Microsoft's .mdf file type can be used by both the LocalDB and the full SQL Server database engines (apologies if engine is not the correct word?). The .mdf file does not care which of these two options is accessing it, so you could use either to access any given .mdf file, provided you had permissions and password etc. The LocalDB and the SQL Server are two options that can be interchangeably chosen to access .mdf files depending on the application requirements. Appreciate any clarification. Thanks
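    If it helps, here is a hedged C# sketch of what that interchangeability looks like in practice: the same (hypothetical) .mdf file attached first through LocalDB and alternatively through a full SQL Server instance, with only the connection string changing. Instance names and the file path are placeholders, and only one engine should have the file attached at any given time.

    using System;
    using System.Data.SqlClient;

    class MdfAccessDemo
    {
        static void Main()
        {
            // The same (hypothetical) .mdf attached via LocalDB...
            var viaLocalDb = @"Data Source=(LocalDB)\MSSQLLocalDB;AttachDbFilename=C:\Data\MyDb.mdf;Integrated Security=True";

            // ...or via a full SQL Server instance.
            var viaSqlServer = @"Data Source=.\SQLEXPRESS;AttachDbFilename=C:\Data\MyDb.mdf;Integrated Security=True";

            // Swap in viaSqlServer to go through the full engine instead.
            using (var connection = new SqlConnection(viaLocalDb))
            using (var command = new SqlCommand("SELECT COUNT(*) FROM sys.tables", connection))
            {
                connection.Open();
                Console.WriteLine("Tables in the database: " + command.ExecuteScalar());
            }
        }
    }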

    Read the article

  • Android Emulator - Ubuntu 12.04

    - by Aneesh karthik C
    When I type in the command emulator Andreud, where 'Andreud' is the name of the emulator I created, it gives the following errors, and a blank screen shows up where I should get the Android home screen, icons, etc.:

    PVRDRIInitPVR2D: PVR2D device index (0)
    Failed to load libGL.so
    error libGL.so: cannot open shared object file: No such file or directory
    Failed to load libGL.so
    error libGL.so: cannot open shared object file: No such file or directory

    As per a comment I tried installing ia32-libs:

    aneesh@nb14:~$ sudo apt-get install ia32-libs
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Package ia32-libs is not available, but is referred to by another package.
    This may mean that the package is missing, has been obsoleted, or is
    only available from another source
    E: Package 'ia32-libs' has no installation candidate

    I want the home screen to appear. Any help is greatly appreciated.

    Read the article

  • SAB BizTalk Archiving Pipeline Component v0.2

    - by Stuart Brierley
    Just released to Codeplex is an updated version of my archiving pipeline component for BizTalk. The changes in this release are:

    - Addition of FTP adapter macros to the base macros and File adapter macros.
    - Fix for the issue of garbage collection of data streams within pipelines as discussed in this previous blog entry.
    - Now looks for OutboundTransportType in addition to InboundTransportType to pick up the send port transport type; therefore changed the %InboundTransportType% macro to %TransportType%.

    An initial outline of the project can be read here.

    Read the article

  • Programmatically adding metatags to Masterpages in Web Forms and MVC [migrated]

    - by Mike Clarke
    Has anyone got any ideas on where we should be adding our SEO metadata in a Web Forms project? The reason I ask is that currently I have two main Masterpages: one is for the home page and the other is for every other page, and they both have the same metadata, which is just a generic description and set of keywords about my site. The problem with that is search engines are only picking up my home page, and my own search engine displays the same title and description for all results. The other problem is that the site is predominantly dynamic, so if you type in a search for beach, which is a major category on the site, you get 10000 results that all go to different information but all look like a link to the same page to the user. Edit: It's live here: www.themorningtonpeninsula.com
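    One common Web Forms approach (a hedged sketch only, not necessarily what this site should do; page names and text are placeholders) is to keep the master page's head generic and have each content page, or a shared base page, set its own metadata at runtime:

    using System;
    using System.Web.UI;
    using System.Web.UI.HtmlControls;

    // A hypothetical content page that sets its own SEO metadata.
    public partial class BeachCategoryPage : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // ASP.NET 4.0+ exposes these directly on Page.
            Page.Title = "Beaches on the Mornington Peninsula";
            MetaDescription = "Guide to beaches in the region, with maps and photos.";
            MetaKeywords = "beach, swimming, Mornington Peninsula";

            // Extra tags (or older framework versions) can be added as HtmlMeta controls,
            // provided the master page declares <head runat="server">.
            var robots = new HtmlMeta { Name = "robots", Content = "index,follow" };
            Page.Header.Controls.Add(robots);
        }
    }

    For MVC the usual equivalent is to set ViewBag/ViewData values per action and render them in the layout's head section.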

    Read the article
