Search Results

Search found 24931 results on 998 pages for 'information visualization'.


  • Decal implementation

    - by dreta
    I had trouble finding information about decals, so maybe this question will help others. The implementation is for a forward renderer. Could somebody confirm whether I've got the decal implementation right? You define a cube of any dimensions that determines the projection volume in common space. You check for triangle intersection with the defined cube to receive the triangles that the projection will affect. You clip these triangles and save them. You then use matrix tricks to calculate UV coordinates for the saved triangles that'll reference the texture you're projecting. To do this, you take the vectors representing the height, width and depth of the cube in common space, so that e.g. the bottom left corner is the origin. You put those in a matrix as the i, j, k unit vectors, set the translation for the cube, then invert this matrix. You multiply the vertices of the saved triangles by this matrix; that way you get their coordinates inside a 0-to-1-sized cube that you use as the UV coordinates. This way you have the original triangles you're projecting onto, and you have UV coordinates for them (the UV coordinates reference the texture you're projecting). Then you re-render the saved triangles onto the scene, and they overwrite the area of projection with the projected image. Now, the questions I couldn't find answers to. Is the last point right? I've never done software clipping, but it seems error-prone enough, due to limited precision, that there'll be some z-fighting occurring for the projected texture. Also, is this way of getting UV coordinates correct?
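    To make the matrix trick concrete, here is a minimal C# sketch of the UV derivation described above, using System.Numerics types (my choice; the technique itself is renderer-agnostic). It builds the cube's basis matrix, inverts it, and transforms a clipped vertex into the cube's 0-to-1 local space:

        using System;
        using System.Numerics;

        static class DecalUv
        {
            // width/height/depth are the cube's edge vectors in common space;
            // origin is its bottom-left corner (the i, j, k setup from the question).
            public static Vector3 ToCubeSpace(Vector3 vertex, Vector3 origin,
                                              Vector3 width, Vector3 height, Vector3 depth)
            {
                // Rows are the basis vectors, the fourth row is the translation,
                // so this matrix maps cube-local (u, v, w) into common space.
                var cubeToWorld = new Matrix4x4(
                    width.X,  width.Y,  width.Z,  0,
                    height.X, height.Y, height.Z, 0,
                    depth.X,  depth.Y,  depth.Z,  0,
                    origin.X, origin.Y, origin.Z, 1);

                if (!Matrix4x4.Invert(cubeToWorld, out var worldToCube))
                    throw new ArgumentException("Degenerate projection cube.");

                // X and Y form the UV pair; Z (depth into the volume) can drive a fade-out.
                return Vector3.Transform(vertex, worldToCube);
            }
        }

    Vertices that land outside the 0-to-1 range fall outside the decal texture, which is why the triangles are clipped against the cube first.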

    Read the article

  • Deploying Fusion Order Demo on 11.1.1.6 by Antony Reynolds

    - by JuergenKress
    Do you need to build a demo for a customer? Why not use the Fusion Order Demo (FOD) and modify it to do some extra things? Great idea, let me install it on one of my Linux servers, I said. It turns out there are a few gotchas, so here is how I installed it on a Linux server with JDeveloper on my Windows desktop. Task 1: Install Oracle JDeveloper Studio. I already had JDeveloper 11.1.1.6 with the SOA extensions installed, so this was easy. Task 2: Install the Fusion Order Demo Application. The first thing to do is to obtain the latest version of the demo from OTN; I obtained the R1 PS5 release. Gotcha #1 – my WinZip wouldn't unzip the file; I had to use 7-Zip. Task 3: Install Oracle SOA Suite. On the domain, modify the setDomainEnv script by adding "-Djps.app.credential.overwrite.allowed=true" to JAVA_PROPERTIES and restarting the Admin Server. Also set the JAVA_HOME variable and add Ant to the path. I created a domain with separate SOA and BAM servers and also set up the Node Manager to make it easier to stop and start components. Read the full blog post by Antony. SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: Fusion Order Demo on 11.1.1.6,Antony Reynolds,SOA Community,Oracle SOA,Oracle BPM,BPM,Community,OPN,Jürgen Kress

    Read the article

  • Implications and benefits of removing NT AUTHORITY\SYSTEM from sysadmin role?

    - by Cade Roux
    Disclaimer: I am not a DBA; I am a database developer. A DBA just sent a report to our data stewards and is planning to remove the NT AUTHORITY\SYSTEM account from the sysadmin role on a bunch of servers (it probably violates some audit requirement they received). I see an MSKB article that says not to do this. From what I can tell from reading a variety of disparate information on the web, a bunch of special services/operations (Volume Copy, Full Text Indexing, MOM, Windows Update) use this account even when the SQL Server and Agent services etc. are all running under dedicated accounts.
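    As a side note for anyone auditing this themselves: the role's current members can be inventoried programmatically before anything is removed. Below is a minimal C# sketch using SQL Server Management Objects (SMO); the instance name is a placeholder, and it assumes the SMO assemblies are installed:

        using System;
        using Microsoft.SqlServer.Management.Smo;

        class SysadminAudit
        {
            static void Main()
            {
                // Connect to the instance under audit (name is hypothetical).
                var server = new Server("MYSERVER");
                ServerRole sysadmin = server.Roles["sysadmin"];

                // Look for NT AUTHORITY\SYSTEM in the output before removing it.
                foreach (string member in sysadmin.EnumServerRoleMembers())
                    Console.WriteLine(member);
            }
        }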

    Read the article

  • Relaying requests between third party server and Heroku for static IP

    - by Gady
    I have a Rails application hosted on Heroku that I need to integrate with a third-party payments provider. The payment provider requires that my application have a static IP for incoming and outgoing HTTPS requests. I want to deploy a proxy on a Linode VPS so it can relay the information. Relaying requests to the service provider seems easy; I just use their IP. Can I relay requests coming from the service provider to the Heroku application? Can I relay the request using a URL (https://myapp.herokuapp.com)? What is the recommended proxy server to use?
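    To illustrate that the answer to the URL question is yes, here is a minimal hand-rolled relay sketch in C# (the port and the use of HttpListener are my assumptions; in practice a dedicated proxy such as nginx or HAProxy on the VPS is the usual choice). It accepts each request on the VPS's static IP and re-issues it against the Heroku hostname:

        using System;
        using System.Net;

        class StaticIpRelay
        {
            const string UpstreamBase = "https://myapp.herokuapp.com"; // the Heroku app, by URL

            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://+:8080/"); // VPS port; needs admin rights or a urlacl
                listener.Start();
                while (true)
                {
                    HttpListenerContext ctx = listener.GetContext();

                    // Re-issue the incoming request against the Heroku hostname.
                    var upstream = (HttpWebRequest)WebRequest.Create(UpstreamBase + ctx.Request.RawUrl);
                    upstream.Method = ctx.Request.HttpMethod;
                    if (ctx.Request.HasEntityBody)
                        ctx.Request.InputStream.CopyTo(upstream.GetRequestStream());

                    try
                    {
                        using (var response = (HttpWebResponse)upstream.GetResponse())
                        {
                            ctx.Response.StatusCode = (int)response.StatusCode;
                            response.GetResponseStream().CopyTo(ctx.Response.OutputStream);
                        }
                    }
                    catch (WebException ex)
                    {
                        // Pass upstream 4xx/5xx answers straight back to the caller.
                        var err = ex.Response as HttpWebResponse;
                        ctx.Response.StatusCode = err != null ? (int)err.StatusCode : 502;
                    }
                    ctx.Response.Close();
                }
            }
        }

    Headers are not copied here for brevity; a real relay would forward them and terminate HTTPS itself, since the provider requires it.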

    Read the article

  • Microsoft Sync Framework

    - by kaleidoscope
    Introduction: It is a platform that enables collaboration and offline access for applications, services and devices. Sync Framework features technologies and tools that enable roaming, data sharing and taking data offline. Moreover, developers can build synchronization ecosystems that integrate any application with data from any store, using any protocol over any network. (A minimal usage sketch follows the lists below.)

    Highlights:
    * Add sync support to new and existing applications, services and devices
    * Enable collaboration and offline capabilities for any application
    * Roam and share information from any data store, over any protocol and over any network configuration
    * Leverage sync capabilities exposed in Microsoft technologies to create sync ecosystems
    * Extend the architecture to support custom data types, including files

    Benefits of using Sync Framework:
    * An extensible model that lets you integrate multiple data sources into a synchronization ecosystem.
    * A managed API for all components and a native API for select components.
    * Conflict handling for automatic and custom resolution schemes.
    * Filters that let you synchronize a subset of data, such as only those files that contain images.
    * A compact and efficient metadata model that enables synchronization for virtually any participant, without significant changes to the data store:
      - Any data store: add synchronization to a wide range of applications, services and devices.
      - Any data type: introduce new data types to synchronize.
      - Any protocol: use existing architectures and protocols to synchronize data. The transport-agnostic architecture allows integration of synchronization into a variety of protocols, including over-the-air and embedded devices.
      - Any network configuration: enable synchronization for your applications, devices and services in true peer-to-peer or hub-and-spoke configurations. Easily recover from network interruptions. Reduce network traffic by efficiently selecting changes to synchronize.

    Technorati Tags: Anish Sharma,Microsoft Sync Framework
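    To give a flavor of the managed API, here is a minimal two-way file synchronization sketch using the Microsoft.Synchronization assemblies (the folder paths are placeholders, and this assumes the Sync Framework runtime is installed):

        using System;
        using Microsoft.Synchronization;
        using Microsoft.Synchronization.Files;

        class FileSyncDemo
        {
            static void Main()
            {
                // The two replicas to keep in sync (paths are hypothetical).
                using (var local = new FileSyncProvider(@"C:\SyncDemo\Laptop"))
                using (var remote = new FileSyncProvider(@"C:\SyncDemo\Desktop"))
                {
                    var agent = new SyncOrchestrator
                    {
                        LocalProvider = local,
                        RemoteProvider = remote,
                        Direction = SyncDirectionOrder.UploadAndDownload // two-way sync
                    };
                    SyncOperationStatistics stats = agent.Synchronize();
                    Console.WriteLine("Uploaded {0} and downloaded {1} changes.",
                        stats.UploadChangesApplied, stats.DownloadChangesApplied);
                }
            }
        }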

    Read the article

  • Run application or script on Windows RDC connection

    - by NickLarsen
    I checked this thread, but it did not solve my exact problem. I need to run a script when a connection is made across my network using Windows Remote Desktop Connection. The thread listed above works for the initial login; however, if I don't log out (staying logged in is necessary for some processes running on my network), it won't run the script again the next time someone connects to the system using Remote Desktop Connection. Previously we were using pcAnywhere to achieve this; however, after running into some graphical issues with pcAnywhere, we decided to move away from it to RDC. For a little more information: we need to have an email sent out any time a connection is made to particular machines. The login name will always be the same for those systems, and we do not log off when closing the connection.
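    One Windows-native way to catch every connection, including reconnects to a session that was never logged off, is a session-change notification. Here is a rough C# sketch, assuming a small Windows service is acceptable; the service name and mail helper are hypothetical. SessionChangeReason.RemoteConnect is raised on each RDP attach, not just on fresh logons:

        using System.ServiceProcess;

        class RdpWatcherService : ServiceBase
        {
            public RdpWatcherService()
            {
                ServiceName = "RdpConnectionWatcher"; // hypothetical name
                CanHandleSessionChangeEvent = true;   // opt in to session notifications
            }

            protected override void OnSessionChange(SessionChangeDescription change)
            {
                // Fires on every RDP attach, even without a new login.
                if (change.Reason == SessionChangeReason.RemoteConnect)
                    SendNotificationEmail(change.SessionId);
            }

            static void SendNotificationEmail(int sessionId)
            {
                // Hypothetical helper, e.g. via System.Net.Mail.SmtpClient.
            }

            static void Main()
            {
                Run(new RdpWatcherService());
            }
        }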

    Read the article

  • Ask the Readers: Which Google Services Do You Use?

    - by Asian Angel
    Nearly everyone uses at least one of Google's services while browsing each day. What we want to know this week is: which Google services do you use? Image by adria.richards. Google offers a multitude of services such as e-mail, calendar, and docs to help you manage your online life. Some of you may only use a few of the available services while others are power users. A fair number of businesses and schools have also switched over to Google apps and services for their organization. Whether it is at home, work, or both, Google has become a part of our daily lives. Being able to access everything in one place can be extremely useful, but equally frustrating if Google's services experience any downtime. Another concern for some people is the issue of privacy over having so much information stored by a single company. Ultimately the final decision lies with you. Which Google services do you use at home or at work? Let us know in the comments!

    Read the article

  • New PeopleSoft Applications Search

    - by Matthew Haavisto
    As you may have seen from the PeopleTools 8.52 Release Value Proposition, PeopleTools intends to introduce a new search capability in release 8.52. We believe this feature will not only improve the ability of users to find content, but will fundamentally change the way people navigate around the PeopleSoft ecosystem. PeopleSoft applications will be delivering this new search in coming releases and feature packs. PeopleSoft Application Search is actually a framework—a group of features that provides an improved means of searching for a variety of content across PeopleSoft applications. From a user experience perspective, the new search offers a powerful, keyword-based search presented in a familiar, intuitive user experience. Rather than browsing through long menu hierarchies to find a page, data item, or transaction, users can use PeopleSoft Application Search to navigate directly to desired locations. We envision this to be similar to how people navigate across the internet. This capability may reduce or even eliminate the need to navigate PeopleSoft applications using the existing application menu system (though menus will still be available to people who prefer that method). The new search will be available at any point in an application and can be configured to span multiple PeopleSoft applications. It enables users to initiate transactions or navigate to key information without using the PeopleSoft application menus. In addition, filters and facets will enable people to narrow their search result sets, making it easier to identify and navigate to desired application content. Action menus are embedded directly in the search results, allowing users to navigate straight to specific related transactions, pre-populated with the selected search results data. The PeopleSoft Application Search framework uses Oracle's Secure Enterprise Search as its search engine. Most customers will benefit from the new search when it is delivered with applications; however, customers can start deploying it after a Tools-only upgrade, in which case they would have to create their own indices and implement security.

    Read the article

  • It's The End of Work as We Know It, But I Feel Fine

    - by Naresh Persaud
    If you are attending OpenWorld this year, don't miss Amit Jasuja's session on trends in Identity Management. This session will take place on Monday, October 1st, in Moscone West at 10:45. You can join the conversation on Twitter as Amit Jasuja discusses the trends that are shaping Identity Management as a market and how Oracle is responding to these secular trends; use hashtag #OracleIDM. In addition, here's a list of the sessions in the Identity Management track. In Amit's session, he will discuss how the workplace is changing. The pace of technology is accelerating, and work is no longer a place but rather an activity. We are behaving socially in our professional lives, and our professional responsibilities are encroaching on our social lives. The net result is that we will need to change the way we work and collaborate. Work is anytime and anywhere. This impacts the dynamics of teams and how they access information and applications. Our teams span multiple organizations, and "the new work order" means enabling the interaction and securing the experience. It is the end of work as we know it, both economically and technologically. Join Amit for this session and you will feel much better about the changing workplace.

    Read the article

  • Canon Pixma 432 network scanner

    - by Donald Cutler
    I have a problem with a Canon Pixma MX432 printer/scanner. I just removed Windows 7 and installed Ubuntu 14.04 (on 8/17/2014) on an older desktop that I built (the computer is an AMD build with an ASUS motherboard). The printer/scanner is an all-in-one unit that is networked in my house via WiFi, and all of the computers in my house can access it: my MacBook, my wife's Windows 8 laptop, and my kids' iPad Minis. I am giving Linux a test drive, with some success as far as setting devices up, but for the life of me I cannot figure out my scanner issue. If anyone can help I would appreciate it. Make/model: Canon Pixma MX432. PPD driver: I have no idea how to get this info. Supported? No, from the information I gathered from old forum posts. Works? The printer works via WiFi perfectly, but not the scanner. The Simple Scan program sees the scanner, but produces an error when I attempt to scan. I also tried XSane, but that program does not even detect the scanner. Note: the printer is working off of an Ubuntu driver and not a Canon driver. Linux version: Ubuntu 14.04. I tried the steps in this post (http://ubuntuforums.org/archive/index.php/t-2096430.html) and downloaded the "scangearmp-mx430series-1.90-1-deb.tar.gz" file, but could not get the scanner to work. Any suggestions?

    Read the article

  • Getting FC11 to run under VMware Server, converted from physical machine

    - by Kristian
    I have an FC11 installation that I have converted to a VMware disk image to run on my VMware Server. I converted it with qemu-img, as the VMware Converter software apparently only converts Linux hosts to VMware Infrastructure servers. The disk image boots fine (GRUB is loaded and boots the kernel), but it seems the disk is not found by the kernel, and the boot process stalls. Hotplugging USB devices works (the kernel prints debug information) and I'm able to press keys (Ctrl-Alt-Delete, for instance). The VMware guest OS is set to Red Hat Enterprise Linux 5 (32 bit), and I have tried the LSI Logic, LSI Logic SAS and VMware Accelerated SCSI controllers, to no avail. I'm able to boot an installer disk, get into rescue mode and mount the filesystem, so my question is: what do I need to do to the guest kernel / initrd image to make it recognize the virtual disk?

    Read the article

  • Guide to the web development ecosystem

    - by acjohnson55
    I'm a long-time software developer, and I've been thrown in the deep, deep end of developing from the ground up what will hopefully be a highly scalable and interactive web application. I've been out of the web game for about 8 years, and even when I was last in it, I wasn't exactly on the cutting edge. I think I've made judicious design decisions and I'm quite happy with the progress I've been making so far, but new, hot web technologies keep crawling out of the woodwork and into my headspace, forcing me to continually revalidate my implementation decisions. Complicating things even further is the preponderance of out-of-date information and the difficulty of knowing what is out of date in the first place. What I'm wondering is, are there any comprehensive books or guides dedicated to compiling and comparing the technologies out there, end-to-end in the web application stack? I'm happy to learn new techs on demand, but I don't like learning about them after I've already spent time going in another direction. I'm looking for the sort of executive info a CTO might read to make sure the best architectural decisions are being made. And just to be clear, this is a question about resources, not about specific technology suggestions.

    Read the article

  • Viewing a large field in a query in SQL management studio with ZOOM?

    - by smithym
    Hi there, can anyone help? I am using SQL Server Management Studio (SQL Server 2008) to run queries, and some of the fields that come back are varchar(max), for example, and contain a lot of information. Is there a zoom feature to open a window and show me the contents with vertical and horizontal scrollbars? I remember there was one; I thought it was F2, but I must have been mistaken, as that doesn't work. Now I have to scroll horizontally across the field, and it's really difficult to see everything. Also, some of the fields contain newline codes etc., so it would be great if the zoom feature would display the info using those newline codes. Does anybody know how to do this?

    Read the article

  • Where'd My Data Go? (and/or...How Do I Get Rid of It?)

    - by David Paquette
    Want to get a better idea of how cascade deletes work in Entity Framework Code First scenarios? Want to see it in action? Stick with us as we quickly demystify what happens when you tell your data context to nuke a parent entity. This post is authored by Calgary .NET User Group Leader David Paquette with help from Microsoft ASP.NET MVP James Chambers. We got to spend a great week back in March at Prairie Dev Con West, chock-full of sessions, presentations, workshops, conversations and, of course, questions. One of the questions that came up during my session: "How does Entity Framework Code First deal with cascading deletes?". James and I had different thoughts on what the default was, if it was different from SQL Server, if it was the same as EF proper, and if there was a way to override whatever the default was. So we built a set of examples and figured out that the answer is simple: it depends. (Download Samples)

    Consider the example of a hockey league. You have several different entities in the league, including games, teams that play the games and players that make up the teams. Each team also has a mascot. If you delete a team, we need a couple of things to happen: the team, games and mascot will be deleted, and the players for that team will remain in the league (and therefore the database) but should no longer be assigned to a team. So, let's make this start to come together with a look at the default behaviour in SQL when using an EDMX-driven project.

    The Reference – Understanding EF's Behaviour with an EDMX/DB First Approach

    First up, let's take a look at the DB first approach. In the database, we defined 4 tables: Teams, Players, Mascots, and Games. We also defined 4 foreign keys as follows:

        Players.Team_Id (NULL) –> Teams.Id
        Mascots.Id (NOT NULL) –> Teams.Id (ON DELETE CASCADE)
        Games.HomeTeam_Id (NOT NULL) –> Teams.Id
        Games.AwayTeam_Id (NOT NULL) –> Teams.Id

    Note that by specifying ON DELETE CASCADE for the Mascots –> Teams foreign key, the database will automatically delete the team's mascot when the team is deleted. While we want the same behaviour for the Games –> Teams foreign keys, it is not possible to accomplish this using ON DELETE CASCADE in SQL Server. Specifying ON DELETE CASCADE on these foreign keys would cause a circular reference error: "The series of cascading referential actions triggered by a single DELETE or UPDATE must form a tree that contains no circular references. No table can appear more than one time in the list of all cascading referential actions that result from the DELETE or UPDATE." – MSDN

    When we create an entity data model from the above database, we get the following. In order to get the Games to be deleted when the Team is deleted, we need to specify an End1 OnDelete action of Cascade for the HomeGames and AwayGames associations. Now we have an Entity Data Model that accomplishes what we set out to do. One caveat here is that Entity Framework will only properly handle the cascading delete when the players and games for the team have been loaded into memory. For a more detailed look at Cascade Delete in EF Database First, take a look at this blog post by Alex James.

    Building The Same Sample with EF Code First

    Next, we're going to build up the model with the code first approach. EF Code First is defined on the ADO.NET team blog as such: "Code First allows you to define your model using C# or VB.Net classes, optionally additional configuration can be performed using attributes on your classes and properties or by using a Fluent API. Your model can be used to generate a database schema or to map to an existing database."

    Entity Framework Code First follows some conventions to determine when to cascade delete on a relationship. More details can be found on MSDN: if a foreign key on the dependent entity is not nullable, then Code First sets cascade delete on the relationship; if a foreign key on the dependent entity is nullable, Code First does not set cascade delete on the relationship, and when the principal is deleted the foreign key will be set to null. The multiplicity and cascade delete behavior detected by convention can be overridden by using the fluent API. For more information, see Configuring Relationships with Fluent API (Code First).

    Our DbContext consists of 4 DbSets:

        public DbSet<Team> Teams { get; set; }
        public DbSet<Player> Players { get; set; }
        public DbSet<Mascot> Mascots { get; set; }
        public DbSet<Game> Games { get; set; }

    When we set the Mascot –> Team relationship to required, Entity Framework will automatically delete the Mascot when the Team is deleted. This can be done either using the [Required] data annotation attribute, or by overriding the OnModelCreating method of your DbContext and using the fluent API.

    Data Annotations:

        public class Mascot
        {
            public int Id { get; set; }
            public string Name { get; set; }
            [Required]
            public virtual Team Team { get; set; }
        }

    Fluent API:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);
        }

    The Player –> Team relationship is automatically handled by the Code First conventions. When a Team is deleted, the Team property for all the players on that team will be set to null. No additional configuration is required; however, all the Player entities must be loaded into memory for the cascading to work properly.

    The Game –> Team relationship causes some grief in our Code First example. If we try setting the HomeTeam and AwayTeam relationships to required, Entity Framework will attempt to set ON DELETE CASCADE for the HomeTeam and AwayTeam foreign keys when creating the database tables. As we saw in the database first example, this causes a circular reference error and throws the following SqlException: "Introducing FOREIGN KEY constraint 'FK_Games_Teams_AwayTeam_Id' on table 'Games' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint."

    To solve this problem, we need to disable the default cascade delete behaviour using the fluent API:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);

            modelBuilder.Entity<Team>()
                .HasMany(t => t.HomeGames)
                .WithRequired(g => g.HomeTeam)
                .WillCascadeOnDelete(false);

            modelBuilder.Entity<Team>()
                .HasMany(t => t.AwayGames)
                .WithRequired(g => g.AwayTeam)
                .WillCascadeOnDelete(false);

            base.OnModelCreating(modelBuilder);
        }

    Unfortunately, this means we need to manually manage the cascade delete behaviour. When a Team is deleted, we need to manually delete all the home and away Games for that Team:

        foreach (Game awayGame in jets.AwayGames.ToArray())
        {
            entities.Games.Remove(awayGame);
        }
        foreach (Game homeGame in homeGames)
        {
            entities.Games.Remove(homeGame);
        }
        entities.Teams.Remove(jets);
        entities.SaveChanges();

    Overriding the Defaults – When and How To

    As you have seen, the default behaviour of Entity Framework Code First can be overridden using the fluent API. This can be done by overriding the OnModelCreating method of your DbContext, or by creating separate model override files for each entity. More information is available on MSDN.

    Going Further

    These were simple examples, but they helped us illustrate a couple of points. First of all, we were able to demonstrate the default behaviour of Entity Framework when dealing with cascading deletes, specifically how entity relationships affect the outcome. Secondly, we showed you how to modify the code and control the behaviour to get the outcome you're looking for. Finally, we showed you how easy it is to explore this kind of thing, and we're hoping that you get a chance to experiment even further. For example, did you know that: Entity Framework Code First also works seamlessly with SQL Azure (MSDN); database creation defaults can be overridden using a variety of IDatabaseInitializers (Understanding Database Initializers); and you can use code-based migrations to manage database upgrades as your model continues to evolve (MSDN).

    Next Steps

    There's no time like the present to start the learning, so here's what you need to do: get up to date in Visual Studio 2010 (VS2010 | SP1) or Visual Studio 2012 (VS2012); build yourself a project to try these concepts out (or download the sample project); and get into the community and ask questions! There are a ton of great resources out there and community members willing to help you out (like these two guys!). Good luck!

    About the Authors

    David Paquette works as a lead developer at P2 Energy Solutions in Calgary, Alberta, where he builds commercial software products for the energy industry. Outside of work, David enjoys outdoor camping, fishing, and skiing. David is also active in the software community, giving presentations both locally and at conferences. David also serves as the President of the Calgary .NET User Group.

    James Chambers crafts software awesomeness with an incredible team at LogiSense Corp, based in Cambridge, Ontario. A husband, father and humanitarian, he is currently residing in the province of Manitoba, where he resists the urge to cheer for the Jets and maintains his allegiance to the Calgary Flames. When he's not active with the family, outdoors or volunteering, you can find James speaking at conferences and user groups across the country about web development and related technologies.

    Read the article

  • Adding and accessing custom sections in your C# App.config

    - by deadlydog
    So I recently thought I'd try using the app.config file to specify some data for my application (such as URLs) rather than hard-coding it into my app, which would require a recompile and redeploy of my app if one of our URLs changed. Using the app.config allows a user to just open up the .config file that sits beside their .exe file, edit the URLs right there, and then re-run the app; no recompiling, no redeployment necessary. I spent a good few hours fighting with the app.config and looking at examples on Google before I was able to get things to work properly. Most of the examples I found showed you how to pull a value from the app.config if you knew the specific key of the element you wanted to retrieve, but it took me a while to find a way to simply loop through all elements in a section, so I thought I would share my solutions here.

    Simple and Easy

    The easiest way to use the app.config is to use the built-in types, such as NameValueSectionHandler. For example, if we just wanted to add a list of database server URLs to use in our app, we could do this in the app.config file like so:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <section name="ConnectionManagerDatabaseServers" type="System.Configuration.NameValueSectionHandler" />
          </configSections>
          <startup>
            <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
          </startup>
          <ConnectionManagerDatabaseServers>
            <add key="localhost" value="localhost" />
            <add key="Dev" value="Dev.MyDomain.local" />
            <add key="Test" value="Test.MyDomain.local" />
            <add key="Live" value="Prod.MyDomain.com" />
          </ConnectionManagerDatabaseServers>
        </configuration>

    And then you can access these values in code like so:

        string devUrl = string.Empty;
        var connectionManagerDatabaseServers = ConfigurationManager.GetSection("ConnectionManagerDatabaseServers") as NameValueCollection;
        if (connectionManagerDatabaseServers != null)
        {
            devUrl = connectionManagerDatabaseServers["Dev"].ToString();
        }

    Sometimes, though, you don't know what the keys are going to be and you just want to grab all of the values in that ConnectionManagerDatabaseServers section. In that case you can get them all like this:

        // Grab the Environments listed in the App.config and add them to our list.
        var connectionManagerDatabaseServers = ConfigurationManager.GetSection("ConnectionManagerDatabaseServers") as NameValueCollection;
        if (connectionManagerDatabaseServers != null)
        {
            foreach (var serverKey in connectionManagerDatabaseServers.AllKeys)
            {
                string serverValue = connectionManagerDatabaseServers.GetValues(serverKey).FirstOrDefault();
                AddDatabaseServer(serverValue);
            }
        }

    And here we just assume that the AddDatabaseServer() function adds the given string to some list of strings. So this works great, but what about when we want to bring in more values than just a single string? (Technically you could use this to bring in two strings, where the "key" could be the other string you want to store; for example, we could have stored the value of the key as the user-friendly name of the URL.)

    More Advanced (and more complicated)

    So if you want to bring in more information than a string or two per object in the section, then you can no longer simply use the built-in System.Configuration.NameValueSectionHandler type provided for us. Instead you have to build your own types. Here let's assume that we again want to configure a set of addresses (i.e. URLs), but we want to specify some extra info with them, such as the user-friendly name, whether they require SSL or not, and a list of security groups that are allowed to save changes made to these endpoints. So let's start by looking at the app.config:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <section name="ConnectionManagerDataSection" type="ConnectionManagerUpdater.Data.Configuration.ConnectionManagerDataSection, ConnectionManagerUpdater" />
          </configSections>
          <startup>
            <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
          </startup>
          <ConnectionManagerDataSection>
            <ConnectionManagerEndpoints>
              <add name="Development" address="Dev.MyDomain.local" useSSL="false" />
              <add name="Test" address="Test.MyDomain.local" useSSL="true" />
              <add name="Live" address="Prod.MyDomain.com" useSSL="true" securityGroupsAllowedToSaveChanges="ConnectionManagerUsers" />
            </ConnectionManagerEndpoints>
          </ConnectionManagerDataSection>
        </configuration>

    The first thing to notice here is that my section is now using the type "ConnectionManagerUpdater.Data.Configuration.ConnectionManagerDataSection" (the fully qualified path to my new class), followed by ", ConnectionManagerUpdater" (the name of the assembly my new class is in). Next, you will also notice an extra layer down in the <ConnectionManagerDataSection>, which is the <ConnectionManagerEndpoints> element. This is a new collection class that I created to hold each of the endpoint entries that are defined. Let's look at that code now:

        using System;
        using System.Collections.Generic;
        using System.Configuration;
        using System.Linq;
        using System.Text;
        using System.Threading.Tasks;

        namespace ConnectionManagerUpdater.Data.Configuration
        {
            public class ConnectionManagerDataSection : ConfigurationSection
            {
                /// <summary>
                /// The name of this section in the app.config.
                /// </summary>
                public const string SectionName = "ConnectionManagerDataSection";

                private const string EndpointCollectionName = "ConnectionManagerEndpoints";

                [ConfigurationProperty(EndpointCollectionName)]
                [ConfigurationCollection(typeof(ConnectionManagerEndpointsCollection), AddItemName = "add")]
                public ConnectionManagerEndpointsCollection ConnectionManagerEndpoints
                { get { return (ConnectionManagerEndpointsCollection)base[EndpointCollectionName]; } }
            }

            public class ConnectionManagerEndpointsCollection : ConfigurationElementCollection
            {
                protected override ConfigurationElement CreateNewElement()
                {
                    return new ConnectionManagerEndpointElement();
                }

                protected override object GetElementKey(ConfigurationElement element)
                {
                    return ((ConnectionManagerEndpointElement)element).Name;
                }
            }

            public class ConnectionManagerEndpointElement : ConfigurationElement
            {
                [ConfigurationProperty("name", IsRequired = true)]
                public string Name
                {
                    get { return (string)this["name"]; }
                    set { this["name"] = value; }
                }

                [ConfigurationProperty("address", IsRequired = true)]
                public string Address
                {
                    get { return (string)this["address"]; }
                    set { this["address"] = value; }
                }

                [ConfigurationProperty("useSSL", IsRequired = false, DefaultValue = false)]
                public bool UseSSL
                {
                    get { return (bool)this["useSSL"]; }
                    set { this["useSSL"] = value; }
                }

                [ConfigurationProperty("securityGroupsAllowedToSaveChanges", IsRequired = false)]
                public string SecurityGroupsAllowedToSaveChanges
                {
                    get { return (string)this["securityGroupsAllowedToSaveChanges"]; }
                    set { this["securityGroupsAllowedToSaveChanges"] = value; }
                }
            }
        }

    So here the first class we declare is the one that appears in the <configSections> element of the app.config. It is ConnectionManagerDataSection, and it inherits from the necessary System.Configuration.ConfigurationSection class. This class just has one property (other than the expected section name), which basically just says it has a collection property, which is actually a ConnectionManagerEndpointsCollection, the next class defined. The ConnectionManagerEndpointsCollection class inherits from ConfigurationElementCollection and overrides the required members: the first tells it what type of element to create when adding a new one (in our case a ConnectionManagerEndpointElement), and the second is a function specifying which property on our ConnectionManagerEndpointElement class is the unique key, which I've specified to be the Name field.

    The last class defined is the actual meat of our elements. It inherits from ConfigurationElement and specifies the properties of the element (which can then be set in the XML of the app.config). The "ConfigurationProperty" attribute on each of the properties tells us what we expect the name of the property to correspond to in each element in the app.config, as well as some additional information such as whether that property is required and what its default value should be.

    Finally, the code to actually access these values would look like this:

        // Grab the Environments listed in the App.config and add them to our list.
        var connectionManagerDataSection = ConfigurationManager.GetSection(ConnectionManagerDataSection.SectionName) as ConnectionManagerDataSection;
        if (connectionManagerDataSection != null)
        {
            foreach (ConnectionManagerEndpointElement endpointElement in connectionManagerDataSection.ConnectionManagerEndpoints)
            {
                var endpoint = new ConnectionManagerEndpoint()
                {
                    Name = endpointElement.Name,
                    ServerInfo = new ConnectionManagerServerInfo()
                    {
                        Address = endpointElement.Address,
                        UseSSL = endpointElement.UseSSL,
                        SecurityGroupsAllowedToSaveChanges = endpointElement.SecurityGroupsAllowedToSaveChanges
                            .Split(',').Where(e => !string.IsNullOrWhiteSpace(e)).ToList()
                    }
                };
                AddEndpoint(endpoint);
            }
        }

    This looks very similar to what we had before in the "simple" example. The main points of interest are that we cast the section as ConnectionManagerDataSection (the class we defined for our section) and then iterate over the endpoints collection using the ConnectionManagerEndpoints property we created in the ConnectionManagerDataSection class.

    Also, some other helpful resources around using the app.config that I found (and for parts that I didn't really explain in this article) are: How do you use sections in C# 4.0 app.config? (Stack Overflow) <== shows how to use section groups as well, which is something that I did not cover here but might be of interest to you; How to: Create Custom Configuration Sections Using ConfigurationSection (MSDN); ConfigurationSection Class (MSDN); ConfigurationCollectionAttribute Class (MSDN); ConfigurationElementCollection Class (MSDN).

    I hope you find this helpful. Feel free to leave a comment. Happy coding!

    Read the article

  • update manager in ubuntu 12.10 can't access repository, but software center can

    - by user103597
    For some reason, whenever I try to search for updates with Ubuntu 12.10's update manager, I always get the error "Failed to download repository information", followed by the following details:

        W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/quantal/Release.gpg Unable to connect to extras.ubuntu.com:http:
        W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/quantal/main/i18n/Translation-en Unable to connect to extras.ubuntu.com:http:
        W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/quantal/main/i18n/Translation-en_CA Unable to connect to extras.ubuntu.com:http:
        W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/quantal/main/source/Sources Unable to connect to extras.ubuntu.com:http:
        W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/quantal/main/binary-amd64/Packages Unable to connect to extras.ubuntu.com:http:
        W:Failed to fetch http://extras.ubuntu.com/ubuntu/dists/quantal/main/binary-i386/Packages Unable to connect to extras.ubuntu.com:http:
        E:Some index files failed to download. They have been ignored, or old ones used instead.

    Initially I thought that for whatever reason the repositories were down, so I switched from the Canada server to the main server, but I still got the same error. I also tried installing some things from the Ubuntu Software Center. The funny thing is, that worked fine, and I was able to successfully download and install software from the Software Center, so it seems that only the update manager can't access the repositories. I have searched for and found similar cases (relating to Ubuntu 12.10), but most of those cases involved PPAs, and I don't use any PPAs. Help would be appreciated. Thanks.

    Read the article

  • Exploring MDS Explorer by Mark Nelson

    - by JuergenKress
    Recently, I posted about my colleague Olivier's MDS Explorer tool, which is a great way to get a look inside your MDS repository. I have been playing around with it a little bit, nothing much really, just some cosmetic stuff, but you might like to take a look at it. I made it format the documents nicely with proper indentation, and with line numbers and a nicer editor. It will also warn you if you are about to open a large document, so that you know it has not crashed but that you just have to be patient. And I added some icons and stuff. There is even a nice Dora the Explorer picture hiding in there for those who care to look for it. Read the full article here. SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: MDS Explorer,IDM,SOA Community,Oracle SOA,Oracle BPM,BPM,Community,OPN,Jürgen Kress

    Read the article

  • Online Media Daily: Oracle Takes Social Marketing Seriously

    - by Richard Lefebvre
    In the article published on Nov 12, 2012 and titled "Oracle Integrates Social Marketing Into Enterprise To Gain Marketing Revs," Online Media Daily explores Oracle's approach to social marketing. The publication says that Oracle is focused on showing marketers how to integrate social data into corporate business processes and how to "socialize" the corporate world. The article goes on to state: "Enterprise software companies like Oracle, SAP, IBM, Salesforce and Microsoft have been slowly building up an expertise in social marketing to integrate the data into traditional enterprise resource planning, and customer relationship management tools into social marketing tools. Meg Bear, VP of cloud social platform at Oracle, sees the integration with ERP systems as a differentiator for the company. Oracle Social Relationship Management launched last month. It integrates social data into traditional enterprise applications like Oracle Fusion Marketing, Oracle Fusion Sales Catalog, Oracle ATG Web Commerce and Oracle ERP." The post goes on to quote a Forrester analyst: "'There's room for any process-driven application to run more efficiently, especially if they're socially enabled,' said Rob Koplowitz, VP and principal analyst at Forrester Research. 'It takes the human part of the process not generally captured today to provide better access to content, information and collective actions.' Koplowitz said several acquisitions support Oracle's long-term vision: to layer social on top of other enterprise apps, like its ERP platform." With many great acquisitions under our belt and organically grown social tools, the market recognizes that Oracle is poised to seize the moment in socially enabled business apps. Continue reading the full article here.

    Read the article

  • kubuntu 12.10 will not boot on mac 2.93Ghz intel core 2 duo

    - by Jake Sweet
    I feel like I've tried it all and nothing is changing. I've tried booting from a live USB and a live DVD, and I've checked the MD5; everything matches up. I've even tried different distros, with the same result on all of them (for reference: Linux Mint 13 KDE and Fedora 17). I've also tried changing my live-USB-building software just in case: I've tried UNetbootin and a Linux USB builder, both with the same results. My opinion is that it is a hardware issue, since I'm having nearly the same result with all of these variables. So now, what is actually happening? I can boot up to a screen. I say "a" screen because the ways that DVDs and USBs boot differ somewhat. On the live USB I reach a black screen with white text. It says "booting: done", then below that "loading ramdrive: done", then below that "preparing to boot kernel, this may take a while" and "buckle in", or something to that effect. Then nothing. That's it; the computer freezes. I've waited up to 8 hrs and still nothing. As for the live DVD: everything goes according to the instructions in the PDF files for every distro, until Linux starts. I can only run in compatibility mode; when any other option is tried, the computer seems to freeze/stall/be a pain in my butt... OK, well, that seems to wrap it up. Also, if I'm not explaining something well, I'm sorry; I can try to clear anything up. I'm not the best at descriptions. I'll leave you with the tech specs of my Mac: 2.93GHz Intel Core 2 Duo, 4 GB RAM, NVIDIA GeForce GT 120 graphics, bought in late '09; it's the 24" model. Let me know if any more information will help. Also, thanks in advance.

    Read the article

  • SQL Saturday #220 - Atlanta - Pre-Conference Scholarships!

    - by Most Valuable Yak (Rob Volk)
    We Want YOU…To Learn! AtlantaMDF and Idera are teaming up to find a few good people. If you are: A student looking to work in the database or business intelligence fields A database professional who is between jobs or wants a better one A developer looking to step up to something new On a limited budget and can’t afford professional SQL Server training Able to attend training from 9 to 5 on May 17, 2013 AtlantaMDF is presenting 5 Pre-Conference Sessions (pre-cons) for SQL Saturday #220! And thanks to Idera’s sponsorship, we can offer one free ticket to each of these sessions to eligible candidates! That means one scholarship per Pre-Con! One Recipient Each will Attend: Denny Cherry: SQL Server Security http://sqlsecurity.eventbrite.com/ Adam Machanic: Surfing the Multicore Wave: Processors, Parallelism, and Performance http://surfmulticore.eventbrite.com/ Stacia Misner: Languages of BI http://languagesofbi.eventbrite.com/ Bill Pearson: Practical Self-Service BI with PowerPivot for Excel http://selfservicebi.eventbrite.com/ Eddie Wuerch: The DBA Skills Upgrade Toolkit http://dbatoolkit.eventbrite.com/ If you are interested in attending these pre-cons send an email by April 30, 2013 to [email protected] and tell us: Why you are a good candidate to receive this scholarship Which sessions you’d like to attend, and why (list multiple sessions in order of preference) What the session will teach you and how it will help you achieve your goals The emails will be evaluated by the good folks at Midlands PASS in Columbia, SC. The recipients will be notified by email and announcements made on May 6, 2013. GOOD LUCK! P.S. - Don't forget that SQLSaturday #220 offers free* training in addition to the pre-cons! You can find more information about SQL Saturday #220 at http://www.sqlsaturday.com/220/eventhome.aspx. View the scheduled sessions at http://www.sqlsaturday.com/220/schedule.aspx and register for them at http://www.sqlsaturday.com/220/register.aspx. * Registration charges a $10 fee to cover lunch expenses.

    Read the article

  • Get Proactive: Automatic Support Offers Advantages

    - by A&C Redaktion
    "Proactive" means acting rather than waiting, initiative rather than reaction. In that spirit, the "Get Proactive" program aims to foster a forward-looking, assertive handling of support cases among Oracle Premier Support customers. The automated support for your systems, which can give Oracle partners and customers a clear head start on the competition, covers three areas: Prevent, Resolve and Upgrade. "Prevent" comprises all precautionary measures: their goal is to uncover and solve a potential problem before it has any negative impact. For example, product-related security alerts can be sent to you, along with patch recommendations and risk warnings tailored to your particular system. "Resolve" stands for the ambition to solve problems that do occur faster and in a more targeted way. This requires the right diagnostic tools and measures. Specific information for individual systems is available in the Product Information Center, and auto-detect tools help find solutions for known problems. Partners and users in the Online Support Community provide valuable hints as well, as does, of course, the extensive knowledge base in MOS. "Upgrade", as the name suggests, bundles steps to minimize risk by supporting you during upgrades. Anyone can check their own environment against certified products. The Upgrade Advisor offers tips and tricks with best practices for a wide range of products, processes and versions, and the patch and upgrade plan makes planning system upgrades easier. You can find detailed information on the Oracle Support websites: simply enter "Get Proactive" in the search box.

    Read the article

  • What is the structure of network managers system-connections files?

    - by Oyks Livede
    Could anyone list the complete structure of the configuration files that NetworkManager stores for known networks in /etc/NetworkManager/system-connections? Sample (filename askUbuntu):

        [connection]
        id=askUbuntu
        uuid=81255b2e-bdf1-4bdb-b6f5-b94ef16550cd
        type=802-11-wireless

        [802-11-wireless]
        ssid=askUbuntu
        mode=infrastructure
        mac-address=00:08:CA:E6:76:D8

        [ipv6]
        method=auto

        [ipv4]
        method=auto

    I would like to create some of these files myself using a script; however, before doing so I would like to know every possible option. Furthermore, this structure seems to resemble the information you can get over the D-Bus for active connections:

        dbus-send --system --print-reply \
          --dest=org.freedesktop.NetworkManager \
          "$active_setting_path" \
          org.freedesktop.NetworkManager.Settings.Connection.GetSettings

    (where $active_setting_path is, for example, /org/freedesktop/NetworkManager/Settings/2) will tell you:

        array [
          dict entry(
            string "802-11-wireless"
            array [
              dict entry( string "ssid" variant array of bytes "askUbuntu" )
              dict entry( string "mode" variant string "infrastructure" )
              dict entry( string "mac-address" variant array of bytes [ 00 08 ca e6 76 d8 ] )
              dict entry( string "seen-bssids" variant array [ string "02:1A:11:F8:C5:64" string "02:1A:11:FD:1F:EA" ] )
            ]
          )
          dict entry(
            string "connection"
            array [
              dict entry( string "id" variant string "askUbuntu" )
              dict entry( string "uuid" variant string "81255b2e-bdf1-4bdb-b6f5-b94ef16550cd" )
              dict entry( string "timestamp" variant uint64 1383146668 )
              dict entry( string "type" variant string "802-11-wireless" )
            ]
          )
          dict entry(
            string "ipv4"
            array [
              dict entry( string "addresses" variant array [ ] )
              dict entry( string "dns" variant array [ ] )
              dict entry( string "method" variant string "auto" )
              dict entry( string "routes" variant array [ ] )
            ]
          )
          dict entry(
            string "ipv6"
            array [
              dict entry( string "addresses" variant array [ ] )
              dict entry( string "dns" variant array [ ] )
              dict entry( string "method" variant string "auto" )
              dict entry( string "routes" variant array [ ] )
            ]
          )
        ]

    I can create new settings files using the D-Bus (AddSettings() in /org/freedesktop/NetworkManager/Settings) by passing this type of input, so explaining this structure and telling me all the possible options would also help. AFAIK, this is a Dictionary{String, Dictionary{String, Variant}}. Will there be any difference between creating config files directly and using the D-Bus?

    Read the article

  • delete key on linux does not work

    - by Gauthier Fleutot
    Hi! My Delete key does not work in Ubuntu; it does nothing. I understand that this is a common problem, but I couldn't solve it with the information I found elsewhere. I ran xev. Pressing the 'a' key gives:

        KeyRelease event, serial 30, synthetic NO, window 0x2c00001,
            root 0x1a6, subw 0x0, time 7255643, (-113,-107), root:(425,300),
            state 0x2010, keycode 38 (keysym 0x61, a), same_screen YES,
            XLookupString gives 1 bytes: (61) "a"
            XFilterEvent returns: False

    Pressing 'Delete' gives:

        FocusOut event, serial 30, synthetic NO, window 0x2c00001,
            mode NotifyGrab, detail NotifyAncestor
        FocusIn event, serial 30, synthetic NO, window 0x2c00001,
            mode NotifyUngrab, detail NotifyAncestor
        KeymapNotify event, serial 30, synthetic NO, window 0x0,
            keys: 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
                  0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

    From there I don't know what to do. Help?

    Read the article

  • Securing ClickOnce hosted with Amazon S3 Storage

    - by saifkhan
    Well, since my post on hosting ClickOnce with Amazon S3 Storage, I've received quite a few emails asking how to secure the deployment. At the time of this post, I regret to say that there is no way to secure your ClickOnce deployment hosted with Amazon S3. The S3 storage is secured by ACLs, meaning that a username and password have to be provided before access. Amazon CloudFront, which sits on top of S3, allows you to apply security settings to your CloudFront distribution by applying encryption to the URL, or by restricting by IP. The problem with CloudFront is that the encryption of the URL is mandatory. ClickOnce does not provide a way to pass the "Amazon Public Key" to the CloudFront URL (you probably could if you started editing the XML and HTML files ClickOnce generates, but that defeats the purpose of ClickOnce altogether). What would be nice is if Amazon allowed users to restrict by IP addresses or IP blocks. I sent them an email and received a response that this is something they are looking into... I won't hold my breath, though. Alternative: I suggest you look at Rackspace Cloud hosting (http://www.rackspacecloud.com); they have very competitive pricing and recently started hosting Windows virtual servers. What you can do is rent a virtual server and set up IIS to host your ClickOnce applications. You can then use IIS security settings to restrict which IPs/blocks can access your ClickOnce payloads. Note: you don't really need Windows Server to host ClickOnce; any web server will do. If you are familiar with Linux, you can run that VM with Rackspace for half the price of Windows. I hope you found this information helpful.
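    For reference, the IIS restriction mentioned above would look roughly like this in the site's web.config (the allowed address block is a placeholder, and the IP and Domain Restrictions feature must be installed for the ipSecurity section to take effect):

        <configuration>
          <system.webServer>
            <security>
              <!-- Deny everyone except the payment/partner block you trust. -->
              <ipSecurity allowUnlisted="false">
                <add ipAddress="203.0.113.0" subnetMask="255.255.255.0" allowed="true" />
              </ipSecurity>
            </security>
          </system.webServer>
        </configuration>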

    Read the article
