Search Results

Search found 48063 results on 1923 pages for 'how to create dynamically'.


  • Oracle E-Business Suite (WebADI) integration with Oracle Open Office

    - by Harald Behnke
    Another highlight of the new Oracle Open Office Release 3.3 enterprise features is the Oracle E-Business Suite Release 12.1 (WebADI) integration. The WebADI integration in Oracle Open Office for Windows allows you to bring your Oracle E-Business Suite data into an Oracle Open Office Calc spreadsheet, where familiar data entry and modeling techniques can be used to complete your E-Business Suite tasks. You can create formatted spreadsheets on your desktop that allow you to download, view, edit, and create Oracle E-Business Suite data. Use data entry shortcuts (such as copying and pasting or dragging and dropping ranges of cells), or Calc's Open Document Format (ODF 1.2) compliant spreadsheet formulas, to calculate amounts and save time. You can combine speed and accuracy by invoking lists of values for fields within the spreadsheet. After editing the spreadsheet, you can use WebADI's validation functionality to validate the data before uploading it to the Oracle E-Business Suite. Validation messages are returned to the spreadsheet, allowing you to identify and correct invalid data. A hands-on video demonstration of the Oracle E-Business Suite integration is available. Read more about the Oracle Open Office enterprise features.

    Read the article

  • Executing a Batch file from remote Ax client on an AOS server

    - by Anisha
    Is it possible to execute a batch file on an AOS server from a remote AX client? The answer is yes, provided you have the necessary permissions for this execution on the server. First, create a batch file on your AOS server - for example, one that creates a directory on the server. Insert a command like the following into a .BAT file and place it anywhere on the server:

        mkdir "c:\test"

    Then copy the following code into a server static method of your class and call it from a button click on an AX form. Execute the button click from a remote AX client and see the result: the batch file runs on the server and creates a directory called 'test' in the root directory of the server.

        server static void AOS_batch_file_create()
        {
            boolean b;
            System.Diagnostics.Process process;
            System.Diagnostics.ProcessStartInfo processStartInfo;
            ;
            // Confirm the code is running on the AOS server
            b = Global::isRunningOnServer();
            infolog.add(0, int2str(b));

            // Assert CLR interop permission before calling into .NET
            new InteropPermission(InteropKind::ClrInterop).assert();

            process = new System.Diagnostics.Process();
            processStartInfo = new System.Diagnostics.ProcessStartInfo();
            processStartInfo.set_FileName("C:\\create_dir.bat"); // batch file path on the AOS server
            process.set_StartInfo(processStartInfo);
            process.Start();
            //process.Refresh();
            //process.Close();
            //process.WaitForExit(); // optionally block until the batch file completes
            info("Finished");
        }

    Read the article

  • ASP.NET MVC Controllers & Actions In Regards To URLs And SEO

    - by user1066133
    The general idea is that if I were to create an MVC site, simple pages such as the contact and about pages would be placed under the Home controller, so my URLs would look like http://www.mysite.com/home/contact and http://www.mysite.com/home/about. The above works just fine, but I really don't like having the "home" portion in the URL. So what negatives would there be if I decided to create controllers named Contact and About, each with just a single Index action, so that the URLs are simplified to http://www.mysite.com/contact and http://www.mysite.com/about? This method looks cleaner. Do any of you do the same or something similar? I've been trying to work on SEO for an escort service site I've developed, and when you search for the females the link looks like http://www.mysite.com/escorts/female-escorts, and likewise for males. I'm wondering if I should remove the Escorts controller and just create a Female_Escorts controller with an Index action only, so it comes out like the above as http://www.mysite.com/female-escorts.
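
    For what it's worth, a custom route is one hedged alternative to a controller per page: it keeps a single controller while still producing the short URLs described above. The sketch below assumes standard ASP.NET MVC routing; the controller, action, and route names are illustrative, not from the original post.

        // Hypothetical PagesController holding the simple static pages.
        public class PagesController : Controller
        {
            public ActionResult Contact() { return View(); }
            public ActionResult About() { return View(); }
        }

        // Register BEFORE the default route so /contact and /about match first.
        routes.MapRoute(
            "StaticPages",
            "{action}",                                  // matches /contact, /about
            new { controller = "Pages" },                // all map to PagesController
            new { action = "contact|about" }             // constraint avoids swallowing other URLs
        );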

    Read the article

  • HTML5 point and click adventure game code structure with CreateJS

    - by user1612686
    I'm a programming beginner. I made a tiny one-scene point-and-click adventure game to try to understand simple game logic and came up with this: CreateJS features prototypes for creating bitmap image, sprite, and sound objects. I create them and define their properties in corresponding functions (for example images(), spritesheets(), sounds()...). I then create functions for each animation sequence and "game level" functions, which handle user interactions and play the corresponding animations and sounds for a certain event (when the level is complete, the current level function calls the next level function). And I end up with quite the mess. What would be the "standard" (if something like that exists) OOP approach to structuring simple game data and interactions like that? I thought about making game.images, game.sprites, and game.sounds objects, which contain all the game data with its properties using CreateJS constructors; game.spriteAnimations and game.tweenAnimations objects for sprite animations and tweens; and a game.levelN object, which communicates with a game.interaction object that processes user interaction. Does this make any sense? How do you structure your simple game code? Thanks in advance!

    Read the article

  • Use Your Chart-Drawing Skills to Win a Free Chrome Cr-48 Notebook

    - by ETC
    Today Google announced that they are partnering with a number of Chrome web application developers to distribute a number of their Chrome OS Notebooks to lucky fans. That's when we noticed something interesting that can greatly increase your odds of getting one. Unlike Box, MOG, and Zoho, who are doing random giveaways, the LucidChart giveaway is based on a contest of skill – they are picking the best drawings using their flowchart tool and giving away Chrome Notebooks to the winners. So all you have to do is create one of the most interesting drawings / charts, and you will get your hands on one. We've also confirmed this with the fine people at LucidChart, who told us "any user who spends a bit of time and effort to do something creative has a good shot at winning one." How great is the Chrome Cr-48 Notebook? What's it all about? We wouldn't know, since Google hasn't given us here at How-To Geek an opportunity to use one, despite our attempts. It's sad, since we're huge fans of the Chrome browser, that we can't share our Chrome notebook experiences with hundreds of thousands of daily subscribers and millions of monthly visitors. Hint. Hint. Win a Chrome Cr-48 notebook from LucidChart [LucidChart]

    Read the article

  • OpenGL texture on sphere

    - by Cilenco
    I want to create a rolling, textured ball in OpenGL ES 1.0 for Android. With this code I can create a sphere:

        public Ball(GL10 gl, float radius) {
            this.radius = radius; // store the radius used by build()
            ByteBuffer bb = ByteBuffer.allocateDirect(40000);
            bb.order(ByteOrder.nativeOrder());
            sphereVertex = bb.asFloatBuffer();
            points = build();
        }

        private int build() {
            double dTheta = STEP * Math.PI / 180;
            double dPhi = dTheta;
            int points = 0;

            for (double phi = -(Math.PI / 2); phi <= Math.PI / 2; phi += dPhi) {
                for (double theta = 0.0; theta <= (Math.PI * 2); theta += dTheta) {
                    sphereVertex.put((float) (radius * Math.sin(phi) * Math.cos(theta)));
                    sphereVertex.put((float) (radius * Math.sin(phi) * Math.sin(theta)));
                    sphereVertex.put((float) (radius * Math.cos(phi)));
                    points++;
                }
            }
            sphereVertex.position(0);
            return points;
        }

        public void draw() {
            texture.bind();
            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glVertexPointer(3, GL10.GL_FLOAT, 0, sphereVertex);
            gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, points);
            gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        }

    My problem now is that I want to use this texture for the sphere, but only a black ball is created (presumably because the top-right corner of the texture is black). I use these texture coordinates because I want to use the whole texture:

        0|0  0|1  1|1  1|0

    That's what I learned from texturing a triangle. Is that incorrect if I want to use it with a sphere? What do I have to do to use the texture correctly?

    Read the article

  • What are Web runtime environments and programming languages

    - by Bradly Spicer
    I've been looking into the details behind these two different categories:

    - Web runtime environments
    - Web application programming languages

    I believe I have the correct information and have phrased it correctly, but I am unsure. I have been searching for a while but only find snippets of information or what I can see as useless information (I could be wrong). Here are my descriptions so far:

    Web runtime environments - A runtime environment implements part of the core behaviour of any computer language and allows it to be modified via an API or embedded domain-specific language. A web runtime environment is similar, except it uses web-based languages such as JavaScript, which utilise the core behaviour of a computer language. Another example of a web runtime environment is JsLibs, which is a standalone JavaScript development runtime environment for using JavaScript as a general all-round scripting language. JavaScript is often used to create responsive interfaces which improve the user experience and provide dynamic functionality without having to wait for the server to react and direct to another page.

    Web application programming languages - A web application programming language is something that mimics a traditional desktop application within a web page. For example, using PHP you can create forms and tables which use a database, similar to that of Microsoft Excel. Some of the other languages for web application programming are:

    - Ajax
    - Perl
    - Ruby

    Here are some of the resources used:

    http://en.wikipedia.org/wiki/Web_application_development
    http://code.google.com/p/jslibs/

    I would like some confirmation that the descriptions I have created are correct, as I am still slightly unsure as to whether I have hit the nail on the head.

    Read the article

  • release management system - architectural question

    - by Sonic Soul
    Every place I've worked created its own release process, and all of them worked pretty well; however, it took pretty good effort (and often a dedicated team) to manage releases. I am currently at a new place and about to design such a system, but this time the team is very lean and we won't have dedicated resources for releasing. It will be up to the development manager until the system is proven enough for other developers to use. We're using Subversion as the code repository, TeamCity as the build server, Jira as the issue tracker, and an Oracle db. I was thinking about writing a basic workflow app that will let developers create a new manifest which will specify the following items:

    - release details (who, Jira issues, etc.)
    - workflow step (dev, test, UAT, prod approved, prod released)
    - source files

    That last item is where it can get hairy, especially with database scripts. I figured I'd ask if there is a good pattern, or an off-the-shelf product, that could help with the database part, or perhaps the whole process. I briefly tested the Red Gate Oracle deployment tool, but it didn't work out as well as I had hoped (from 1 day of testing at least). Questions:

    1. I think I could get around releasing our code with something like Octopus Deploy straight from TeamCity. I am not clear, however, how I could create a simple database deployment part that will track which version of which script (from Subversion) has been deployed where.
    2. Is there already some utility I could use for navigating Subversion to choose which scripts should be released, instead of writing one from scratch? I'd just need it to produce some manifest of paths + versions.

    Read the article

  • Tutorial: Getting Started with the NoSQL JavaScript / Node.js API for MySQL Cluster

    - by Mat Keep
    Tutorial authored by Craig Russell and JD Duncan.

    The MySQL Cluster team are working on a new NoSQL JavaScript connector for MySQL. The objectives are simplicity and high performance for JavaScript users:

    - allows end-to-end JavaScript development, from the browser to the server and now to the world's most popular open source database
    - native "NoSQL" access to the storage layer without going first through SQL transformations and parsing

    Node.js is a complete web platform built around JavaScript designed to deliver millions of client connections on commodity hardware. With the MySQL NoSQL Connector for JavaScript, Node.js users can easily add data access and persistence to their web, cloud, social and mobile applications. While the initial implementation is designed to plug and play with Node.js, the actual implementation doesn't depend heavily on Node, potentially enabling wider platform support in the future.

    Implementation

    The architecture and user interface of this connector are very different from other MySQL connectors in a major way: it is an asynchronous interface that follows the event model built into Node.js. To make it as easy as possible, we decided to use a domain object model to store the data. This allows users to query data from the database and have a fully-instantiated object to work with, instead of having to deal with rows and columns of the database. The domain object model can have any user behavior that is desired, with the NoSQL connector providing the data from the database. To make it as fast as possible, we use a direct connection from the user's address space to the database. This approach means that no SQL (pun intended) is needed to get to the data, and no SQL server is between the user and the data.

    The connector is being developed to be extensible to multiple underlying database technologies, including direct, native access to both the MySQL Cluster "ndb" and InnoDB storage engines. The connector integrates the MySQL Cluster native API library directly within the Node.js platform itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed, persistence layer delivering 99.999% availability. The following sections take you through how to connect to MySQL, query the data, and how to get started.

    Connecting to the database

    A Session is the main user access path to the database. You can get a Session object directly from the connector using the openSession function:

        var nosql = require("mysql-js");

        var dbProperties = {
            "implementation" : "ndb",
            "database" : "test"
        };

        nosql.openSession(dbProperties, null, onSession);

    The openSession function calls back into the application upon creating a Session. The Session is then used to create, delete, update, and read objects.

    Reading data

    The Session can read data from the database in a number of ways. If you simply want the data from the database, you provide a table name and the key of the row that you want. For example, consider this schema:

        create table employee (
          id int not null primary key,
          name varchar(32),
          salary float
        ) ENGINE=ndbcluster;

    Since the primary key is a number, you can provide the key as a number to the find function:

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find('employee', 0, onData);
        };

        var onData = function(err, data) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(data));
          // ... use data in application
        };

    If you want to have the data stored in your own domain model, you tell the connector which table your domain model uses by specifying an annotation, and pass your domain model to the find function:

        var annotations = new nosql.Annotations();

        var Employee = function(id, name, salary) {
          this.id = id;
          this.name = name;
          this.salary = salary;
          this.giveRaise = function(percent) {
            this.salary *= percent;
          };
        };

        annotations.mapClass(Employee, {'table' : 'employee'});

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find(Employee, 0, onData);
        };

    Updating data

    You can update the emp instance in memory, but to make the raise persistent, you need to write it back to the database using the update function:

        var onData = function(err, emp) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(emp));
          emp.giveRaise(0.12); // gee, thanks!
          session.update(emp); // oops, session is out of scope here
        };

    Using JavaScript can be tricky because it does not have the concept of block scope for variables. You can create a closure to handle these variables, or use a feature of the connector to remember your variables. The connector API takes a fixed number of parameters and returns a fixed number of result parameters to the callback function, but the connector will keep track of variables for you and return them to the callback. So in the above example, change the onSession function to remember the session variable, and you can refer to it in the onData function:

        var onSession = function(err, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          session.find(Employee, 0, onData, session);
        };

        var onData = function(err, emp, session) {
          if (err) {
            console.log(err);
            // ... error handling
          }
          console.log('Found: ', JSON.stringify(emp));
          emp.giveRaise(0.12); // gee, thanks!
          session.update(emp, onUpdate); // session is now in scope
        };

        var onUpdate = function(err, emp) {
          if (err) {
            console.log(err);
            // ... error handling
          }
        };

    Inserting data

    Inserting data requires a mapped JavaScript user function (constructor) and a session. Create a variable and persist it:

        var onSession = function(err, session) {
          var data = new Employee(999, 'Mat Keep', 20000000);
          session.persist(data, onInsert);
        };

    Deleting data

    To remove data from the database, use the session's remove function. You use an instance of the domain object to identify the row you want to remove; only the key field is relevant:

        var onSession = function(err, session) {
          var key = new Employee(999);
          session.remove(key, onDelete);
        };

    More extensive queries

    We are working on the implementation of more extensive queries along the lines of the criteria query API. Stay tuned.

    How to evaluate

    The MySQL Connector for JavaScript is available for download from labs.mysql.com. Select the build: MySQL-Cluster-NoSQL-Connector-for-Node-js. You can also clone the project on GitHub. Since it is still early in development, feedback is especially valuable (so don't hesitate to leave comments on this blog, or head to the MySQL Cluster forum). Try it out and see how easy (and fast) it is to integrate MySQL Cluster into your Node.js platforms. You can learn more about other previewed functionality of MySQL Cluster 7.3 here.

    Read the article

  • Problems Using CloudFlare On Blogger

    - by the_archer
    Here's the situation. I got a TLD for my Blogger blog and set it up using the instructions from Blogger. Blogger asks to add two CNAME records:

    1. For the first CNAME, where it says Name, Label or Host enter "www", and where it says Destination, Target or Points To enter "ghs.google.com".
    2. For the second CNAME, enter "NHRILA4K2RJG" as the Name and "gv-GQMUMYGHAMJWECXFLJXVXABIV23C55JIPNIAVD5IGFSXT653O5GA.domainverify.googlehosted.com." as the value.

    I did that on my domain host, and everything was working smoothly. Here are the things that happened:

    - Typing myblog.blogspot.com in the address bar brought me to my new address www.mynewaddress.tld
    - Typing mynewaddress.tld brings me to www.mynewaddress.tld

    Now, I went through the instructions to set up CloudFlare and did everything as required. I saw that CloudFlare is active and working on my TLD www.mynewaddress.tld; however, when I type the blogspot address, i.e. myblog.blogspot.com, it shows a notice that the blog is not hosted on Blogger and that I should click "yes" to get redirected to the new website. However, the blog is still on Blogger. I think the problem might be with that particular second CNAME record Google asks to create, which I did not find imported into the CloudFlare nameservers. So I created that CNAME and added it to the CloudFlare panel. My question is: is that what will help Google determine that my blog is still hosted on Blogger? If so, should I turn CloudFlare on or off for that particular CNAME record? Any help is very much appreciated :)

    Read the article

  • Unit testing internal methods in a strongly named assembly/project

    - by Rohit Gupta
    If you need to create unit tests for internal methods within an assembly in Visual Studio 2005 or greater, you need to add an entry to the AssemblyInfo.cs file of the assembly you are creating the unit tests for. For example, if you need to create tests for an assembly named FincadFunctions.dll, and this assembly contains internal/friend methods which you need to write unit tests for, then you add an entry to FincadFunctions.dll's AssemblyInfo.cs file like so:

        [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("FincadFunctionsTests")]

    where FincadFunctionsTests is the name of the unit test project which contains the unit tests. However, if FincadFunctions.dll is a strongly named assembly, you will see the following error when compiling the FincadFunctions.dll assembly:

        Friend assembly reference "FincadFunctionsTests" is invalid. Strong-name assemblies must specify a public key in their InternalsVisibleTo declarations.

    Thus, to add a public key token to the InternalsVisibleTo declaration, do the following. You need the .snk file that was used to strong-name the FincadFunctions.dll assembly. You can extract the public key from this .snk with the sn.exe tool from the .NET SDK. First we extract just the public key from the key pair (.snk) file into another .snk file:

        sn -p test.snk test.pub

    Then we ask for the value of that public key (note we need the long hex key, not the short public key token):

        sn -tp test.pub

    We end up getting a super long string of hex, but that's just what we want: the public key value of this key pair. We add it to the strongly named project "FincadFunctions.dll" that we want to expose our internals from. What before looked like:

        [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("FincadFunctionsTests")]

    now looks like:

        [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("FincadFunctionsTests,
            PublicKey=002400000480000094000000060200000024000052534131000400000100010011fdf2e48bb")]

    And we're done. Hope this helps.
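
    As a hedged illustration of what the attribute buys you (the class and test names below are hypothetical, not from the original post), an internal method becomes directly callable from the test project once InternalsVisibleTo is in place:

        // In FincadFunctions.dll - hypothetical internal class under test.
        internal static class CurveMath
        {
            // Internal method we want to unit test from another assembly.
            internal static double Discount(double rate, int years)
            {
                return 1.0 / System.Math.Pow(1.0 + rate, years);
            }
        }

        // In FincadFunctionsTests.dll - compiles only because of InternalsVisibleTo.
        [Microsoft.VisualStudio.TestTools.UnitTesting.TestClass]
        public class CurveMathTests
        {
            [Microsoft.VisualStudio.TestTools.UnitTesting.TestMethod]
            public void Discount_OneYearAtFivePercent()
            {
                double d = CurveMath.Discount(0.05, 1);
                Microsoft.VisualStudio.TestTools.UnitTesting.Assert.IsTrue(
                    System.Math.Abs(d - 1.0 / 1.05) < 1e-12);
            }
        }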

    Read the article

  • Can mediatomb VLC profile transcode audio as MP3 rather than mpga?

    - by djangofan
    In /etc/mediatomb/config.xml, can the mediatomb VLC profile transcode audio as MP3 rather than mpga? My Sony GoogleTV won't render streamed .avi files with mpga audio in them. The original files are DivX-encoded with 128 kbps MP3 audio, but mediatomb is transcoding them. How can I change this? Any ideas? Can I turn off the audio and video transcoding somehow? I need some ideas to try.

        <profile name="vlcprof" enabled="yes" type="external">
          <mimetype>video/mpeg</mimetype>
          <agent command="vlc" arguments="-I dummy %in --sout #transcode{venc=ffmpeg,vcodec=mp2v,vb=4096,fps=25,aenc=ffmpeg,acodec=mpga,ab=192,samplerate=44100,channels=2}:standard{access=file,mux=ps,dst=%out} vlc:quit"/>
          <buffer size="10485760" chunk-size="131072" fill-size="2621440"/>
          <accept-url>yes</accept-url>
          <first-resource>yes</first-resource>
        </profile>

    I know that MP3 encoding support is external to FFmpeg and must be configured appropriately, but I have no idea how to handle that. I would guess I can work around that by somehow telling ffmpeg to not transcode the audio stream? Also, should I create a separate vlcprof entry for video/avi? Can you create more than one profile for VLC in the config.xml for mediatomb?
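
    For reference, VLC's transcode module does accept acodec=mp3, so one hedged, untested variant of the agent line (all other parameters copied from the profile above; bitrate is illustrative) would be:

        <agent command="vlc" arguments="-I dummy %in --sout #transcode{venc=ffmpeg,vcodec=mp2v,vb=4096,fps=25,acodec=mp3,ab=128,samplerate=44100,channels=2}:standard{access=file,mux=ps,dst=%out} vlc:quit"/>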

    Read the article

  • Lucene and .NET Part I

    - by javarg
    I've been playing around with Lucene.NET, trying to get a feel for what is required to develop and implement a full business application using it. As you would imagine, many things are required to implement a robust solution for indexing content and searching it afterwards. Lucene is a great and robust solution for indexing content. It offers a fast, performance-enhanced search engine library, available in Java and .NET. You will want to use this library in several particular scenarios:

    - In Windows Azure, to support Full Text Search (a functionality not currently supported by SQL Azure)
    - When storing files outside of, or not managed by, your database (like in large document storage solutions that use the file system)
    - When Full Text Search is not really what you need

    Lucene is more than a Full Text Search solution. It has several analyzers that let you process and search content in different ways (decomposing sentences, deriving words, removing articles, etc.). When deciding to implement indexing using Lucene, you will need to take into account the following:

    - How content is to be indexed by Lucene and when: using a service that runs after a specific interval, or immediately when content changes
    - When content is to be available for searching / availability of indexed content (as in real-time content search): immediately when content changes (= near-real-time searching), or after a few minutes
    - Ease of maintainability and development

    Some technical concerns: when indexing content, indexes are locked for writing operations by the IndexWriter. This means that Lucene is best designed to index content using a single-writer approach. When searching, index readers take a snapshot of the indexes. This has the following implications:

    - Setting up an index reader is a costly task. You are not supposed to create one for each query or search. A good practice is to create readers and reuse them for several searches.
    - The latter means that even when the content gets updated, you won't be able to see the changes. You will need to recycle the reader.

    In the second part of this post we will review some alternatives and design considerations.
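
    To make the single-writer / reusable-reader point concrete, here is a minimal hedged sketch against the Lucene.Net 3.x API (the index path and field names are illustrative, and exact signatures vary slightly between Lucene.Net versions, so verify them against your release):

        using System.IO;
        using Lucene.Net.Analysis.Standard;
        using Lucene.Net.Documents;
        using Lucene.Net.Index;
        using Lucene.Net.QueryParsers;
        using Lucene.Net.Search;
        using Lucene.Net.Store;
        using LuceneVersion = Lucene.Net.Util.Version;

        class LuceneSketch
        {
            static void Main()
            {
                var dir = FSDirectory.Open(new DirectoryInfo(@"c:\temp\lucene-index"));
                var analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_30);

                // Single writer: the IndexWriter locks the index for writing.
                using (var writer = new IndexWriter(dir, analyzer, true,
                                                    IndexWriter.MaxFieldLength.UNLIMITED))
                {
                    var doc = new Document();
                    doc.Add(new Field("body", "some document text",
                                      Field.Store.YES, Field.Index.ANALYZED));
                    writer.AddDocument(doc);
                    writer.Commit();
                }

                // The searcher takes a snapshot: create it once, reuse it across
                // queries, and recycle it only after the index has been updated.
                var searcher = new IndexSearcher(dir, true);
                var query = new QueryParser(LuceneVersion.LUCENE_30, "body", analyzer)
                                .Parse("text");
                TopDocs hits = searcher.Search(query, 10);
            }
        }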

    Read the article

  • Windows Azure Mobile Services: New support for iOS apps, Facebook/Twitter/Google identity, Emails, SMS, Blobs, Service Bus and more

    - by ScottGu
    A few weeks ago I blogged about Windows Azure Mobile Services - a new capability in Windows Azure that makes it incredibly easy to connect your client and mobile applications to a scalable cloud backend. Earlier today we delivered a number of great improvements to Windows Azure Mobile Services. New features include:

    - iOS support – enabling you to connect iPhone and iPad apps to Mobile Services
    - Facebook, Twitter, and Google authentication support with Mobile Services
    - Blob, Table, Queue, and Service Bus support from within your Mobile Service
    - Sending emails from your Mobile Service (in partnership with SendGrid)
    - Sending SMS messages from your Mobile Service (in partnership with Twilio)
    - Ability to deploy mobile services in the West US region

    All of these improvements are now live in production and available to start using immediately. Below are more details on them.

    iOS Support

    This week we delivered initial support for connecting iOS based devices (including iPhones and iPads) to Windows Azure Mobile Services. Like the rest of our Windows Azure SDK, we are delivering the native iOS libraries to enable this under an open source (Apache 2.0) license on GitHub. We're excited to get your feedback on this new library through our forum and GitHub issues list, and we welcome contributions to the SDK.

    To create a new iOS app or connect an existing iOS app to your Mobile Service, simply select the "iOS" tab within the Quick Start view of a Mobile Service within the Windows Azure Portal – and then follow either the "Create a new iOS app" or "Connect to an existing iOS app" link below it. Clicking either of these links will expand and display step-by-step instructions for how to build an iOS application that connects with your Mobile Service. Read this getting started tutorial to walk through how you can build (in less than 5 minutes) a simple iOS "Todo List" app that stores data in Windows Azure. Then follow the below tutorials to explore how to use the iOS client libraries to store data and authenticate users:

    - Get Started with data in Mobile Services for iOS
    - Get Started with authentication in Mobile Services for iOS

    Facebook, Twitter, and Google Authentication Support

    Our initial preview of Mobile Services supported the ability to authenticate users of mobile apps using Microsoft Accounts (formerly called Windows Live ID accounts). This week we are adding the ability to also authenticate users using Facebook, Twitter, and Google credentials. These are now supported with both Windows 8 apps as well as iOS apps (and a single app can support multiple forms of identity simultaneously – so you can offer your users a choice of how to login). The below tutorials walk through how to register your Mobile Service with an identity provider:

    - How to register your app with Microsoft Account
    - How to register your app with Facebook
    - How to register your app with Twitter
    - How to register your app with Google

    The tutorials above walk through how to obtain a client ID and a secret key from the identity provider. You can then click on the "Identity" tab of your Mobile Service (within the Windows Azure Portal) and save these values to enable server-side authentication with your Mobile Service. You can then write code within your client or mobile app to authenticate your users to the Mobile Service.
    For example, below is the code you would write to have them login to the Mobile Service using their Facebook credentials:

    Windows Store App (using C#):

        var user = await App.MobileService
                            .LoginAsync(MobileServiceAuthenticationProvider.Facebook);

    iOS app (using Objective C):

        UINavigationController *controller = [self.todoService.client
            loginViewControllerWithProvider:@"facebook"
            completion:^(MSUser *user, NSError *error) {
                //...
            }];

    Learn more about authenticating Mobile Services using Microsoft Account, Facebook, Twitter, and Google from these tutorials:

    - Get started with authentication in Mobile Services for Windows Store (C#)
    - Get started with authentication in Mobile Services for Windows Store (JavaScript)
    - Get started with authentication in Mobile Services for iOS

    Using Windows Azure Blob, Tables and ServiceBus with your Mobile Services

    Mobile Services provide a simple but powerful way to add server logic using server scripts. These scripts are associated with the individual CRUD operations on your mobile service's tables. Server scripts are great for data validation, custom authorization logic (e.g. does this user participate in this game session), augmenting CRUD operations, sending push notifications, and other similar scenarios. Server scripts are written in JavaScript and are executed in a secure server-side scripting environment built using Node.js. You can edit these scripts and save them on the server directly within the Windows Azure Portal.

    In this week's release we have added the ability to work with other Windows Azure services from your Mobile Service server scripts. This is supported using the existing "azure" module within the Windows Azure SDK for Node.js. For example, the below code could be used in a Mobile Service script to obtain a reference to a Windows Azure Table (after which you could query it or insert data into it):

        var azure = require('azure');
        var tableService = azure.createTableService("<< account name >>",
                                                    "<< access key >>");

    Follow the tutorials on the Windows Azure Node.js dev center to learn more about working with Blob, Tables, Queues and Service Bus using the azure module.

    Sending emails from your Mobile Service

    In this week's release we have also added the ability to easily send emails from your Mobile Service, building on our partnership with SendGrid. Whether you want to add a welcome email upon successful user registration, or make your app alert you of certain usage activities, you can do this now by sending email from Mobile Services server scripts. To get started, sign up for a SendGrid account at http://sendgrid.com. Windows Azure customers receive a special offer of 25,000 free emails per month from SendGrid. To sign up for this offer, or get more information, please visit http://www.sendgrid.com/azure.html. Once you have signed up, you can add the following script to your Mobile Service server scripts to send email via the SendGrid service:

        var sendgrid = new SendGrid('<< account name >>', '<< password >>');

        sendgrid.send({
            to: '<< enter email address here >>',
            from: '<< enter from address here >>',
            subject: 'New to-do item',
            text: 'A new to-do was added: ' + item.text
        }, function (success, message) {
            if (!success) {
                console.error(message);
            }
        });

    Follow the Send email from Mobile Services with SendGrid tutorial to learn more.
    Sending SMS messages from your Mobile Service

    SMS is a key communication medium for mobile apps - it comes in handy if you want your app to send users a confirmation code during registration, allow your users to invite their friends to install your app, or reach out to mobile users without a smartphone. Using Mobile Service server scripts and Twilio's REST API, you can now easily send SMS messages from your app. To get started, sign up for a Twilio account. Windows Azure customers receive 1000 free text messages when using Twilio and Windows Azure together. Once signed up, you can add the following to your Mobile Service server scripts to send SMS messages:

        var httpRequest = require('request');
        var account_sid = "<< account SID >>";
        var auth_token = "<< auth token >>";

        // Create the request body
        var body = "From=" + from + "&To=" + to + "&Body=" + message;

        // Make the HTTP request to Twilio
        httpRequest.post({
            url: "https://" + account_sid + ":" + auth_token +
                 "@api.twilio.com/2010-04-01/Accounts/" + account_sid + "/SMS/Messages.json",
            headers: { 'content-type': 'application/x-www-form-urlencoded' },
            body: body
        }, function (err, resp, body) {
            console.log(body);
        });

    I'm excited to be speaking at the TwilioCon conference this week, and will be showcasing some of the cool scenarios you can now enable with Twilio and Windows Azure Mobile Services.

    Mobile Services availability in West US region

    Our initial preview of Windows Azure Mobile Services was only supported in the US East region of Windows Azure. As with every Windows Azure service, over time we will extend Mobile Services to all Windows Azure regions. With this week's preview update we've added support so that you can now create your Mobile Service in the West US region as well.

    Summary

    The above features are all now live in production and are available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using Mobile Services today. Visit the Windows Azure Mobile Developer Center to learn more about how to build apps with Mobile Services. We'll have even more new features and enhancements coming later this week – including .NET 4.5 support for Windows Azure Web Sites. Keep an eye out on my blog for details as new features become available.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • 2D soft-body physics engines?

    - by Griffin
    Hi, so I've recently learned the SFML graphics library and would like to use or make a non-rigid-body 2D physics system to use with it. I have three questions. The definition of a rigid body in Box2D is:

        A chunk of matter that is so strong that the distance between any two bits of matter on the chunk is completely constant.

    And this is exactly what I don't want, as I would like to make elastic, deformable, breakable, and re-connectable bodies.

    1. Are there any simple 2D physics engines with these kinds of characteristics out there? Preferably free or open source?
    2. If not, could I use Box2D and work off of it to create such an engine, even though it's based on rigid bodies?
    3. Finally, if there is a simple physics engine like this, should I go through with the process of creating a new one anyway, simply for experience and to enhance my physics math knowledge?

    I feel like it would help if I ever wanted to modify the code of an existing engine, or create a game with really unique physics. Thanks!
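
    For context on what a minimal soft-body simulation involves, here is a hedged sketch (not from any particular engine; every type name and constant is an illustrative assumption) of a Verlet-integrated mass-spring update, which is the classic way to get elastic, deformable 2D bodies:

        using System;

        // Hedged soft-body sketch: Verlet-integrated mass-spring system.
        class Particle { public float X, Y, PrevX, PrevY; }

        class Spring
        {
            public Particle A, B;
            public float RestLength;
            public float Stiffness = 0.5f; // 0..1, lower = softer / more elastic
        }

        static class SoftBody
        {
            public static void Step(Particle[] ps, Spring[] springs, float dt)
            {
                // Verlet integration: velocity is implicit in (pos - prevPos).
                foreach (var p in ps)
                {
                    float vx = p.X - p.PrevX, vy = p.Y - p.PrevY;
                    p.PrevX = p.X; p.PrevY = p.Y;
                    p.X += vx;
                    p.Y += vy + 9.8f * dt * dt; // gravity
                }

                // Relax springs; partial correction keeps the body deformable.
                foreach (var s in springs)
                {
                    float dx = s.B.X - s.A.X, dy = s.B.Y - s.A.Y;
                    float dist = (float)Math.Sqrt(dx * dx + dy * dy);
                    if (dist < 1e-6f) continue;
                    float diff = (dist - s.RestLength) / dist * s.Stiffness * 0.5f;
                    s.A.X += dx * diff; s.A.Y += dy * diff;
                    s.B.X -= dx * diff; s.B.Y -= dy * diff;
                    // Breakable bodies: drop a spring when dist exceeds a threshold.
                }
            }
        }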

    Read the article

  • Creating Transparent Game Menu Items using AndEngine

    - by Chaitanya Chandurkar
    I'm trying to create a game menu which contains some menu items like:

    - New Game
    - Multiplayer
    - Options
    - Exit

    I want to make these menu items transparent; only the text, in white, should be visible. So I guess I do not need any background image for the menu items. I have seen examples of ButtonSprite like the one given below:

        ButtonSprite playButton = new ButtonSprite(0, 0, btnNormalTextureRegion,
                btnPushedTextureRegion, this.getVertexBufferObjectManager(),
                new OnClickListener() {
            @Override
            public boolean onAreaTouched(TouchEvent pSceneTouchEvent,
                    float pTouchAreaLocalX, float pTouchAreaLocalY) {
                // Do stuff here
            }
        });

    The thing I don't understand is how I can initialize btnNormalTextureRegion. I use the code given below to initialize an ITexture and ITextureRegion for objects:

        mBackgruondTexture = new BitmapTexture(activity.getTextureManager(),
                new IInputStreamOpener() {
            public InputStream open() throws IOException {
                return activity.getAssets().open("gfx/backgrounds/greenbg.jpg");
            }
        });
        mBackgruondTextureRegion = TextureRegionFactory.extractFromTexture(mBackgruondTexture);

    This code opens an image from the assets. As I do not want to use any image for the menu items, how can I initialize btnNormalTextureRegion for the ButtonSprite? Or is there any alternative way to create a game menu?

    Read the article

  • Slick 2D first trial error

    - by pringlesinn
    I followed some advice to learn Slick2D, and when I started on the "SimpleGame" example I got my first error. Does anyone have any idea what it is and how to fix it?

        Sun Dec 26 23:09:12 GMT-03:00 2010 INFO:Slick Build #274
        Sun Dec 26 23:09:12 GMT-03:00 2010 INFO:LWJGL Version: 2.0b1
        Sun Dec 26 23:09:12 GMT-03:00 2010 INFO:OriginalDisplayMode: 1024 x 768 x 16 @60Hz
        Sun Dec 26 23:09:12 GMT-03:00 2010 INFO:TargetDisplayMode: 800 x 600 x 0 @0Hz
        Sun Dec 26 23:09:12 GMT-03:00 2010 ERROR:Could not find a valid pixel format
        org.lwjgl.LWJGLException: Could not find a valid pixel format
            at org.lwjgl.opengl.WindowsPeerInfo.nChoosePixelFormat(Native Method)
            at org.lwjgl.opengl.WindowsPeerInfo.choosePixelFormat(WindowsPeerInfo.java:52)
            at org.lwjgl.opengl.WindowsDisplayPeerInfo.initDC(WindowsDisplayPeerInfo.java:54)
            at org.lwjgl.opengl.WindowsDisplay.createWindow(WindowsDisplay.java:158)
            at org.lwjgl.opengl.Display.createWindow(Display.java:299)
            at org.lwjgl.opengl.Display.create(Display.java:848)
            at org.lwjgl.opengl.Display.create(Display.java:800)
            at org.newdawn.slick.AppGameContainer.tryCreateDisplay(AppGameContainer.java:299)
            at org.newdawn.slick.AppGameContainer.access$000(AppGameContainer.java:34)
            at org.newdawn.slick.AppGameContainer$2.run(AppGameContainer.java:364)
            at java.security.AccessController.doPrivileged(Native Method)
            at org.newdawn.slick.AppGameContainer.setup(AppGameContainer.java:345)
            at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:314)
            at SimpleGame.main(SimpleGame.java:38)
        Exception in thread "main" org.newdawn.slick.SlickException: Failed to initialise the LWJGL display
            at org.newdawn.slick.AppGameContainer.setup(AppGameContainer.java:375)
            at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:314)
            at SimpleGame.main(SimpleGame.java:38)

    Read the article

  • Ruby on Rails - Belongs_to and Active Admin not creating foreign ID

    - by Milo
    I have the following setup:

        class Category < ActiveRecord::Base
          has_many :products
        end

        class Product < ActiveRecord::Base
          belongs_to :category

          has_attached_file :photo, :styles => { :medium => "300x300>", :thumb => "200x200>" }
          validates_attachment_content_type :photo, :content_type => /\Aimage\/.*\Z/
        end

        ActiveAdmin.register Product do
          permit_params :title, :price, :slideshow, :photo, :category

          form do |f|
            f.inputs "Product Details" do
              f.input :title
              f.input :category
              f.input :price
              f.input :slideshow
              f.input :photo, :required => false, :as => :file
            end
            f.actions
          end

          show do |ad|
            attributes_table do
              row :title
              row :category
              row :photo do
                image_tag(ad.photo.url(:medium))
              end
            end
          end

          index do
            column :id
            column :title
            column :category
            column :price do |product|
              number_to_currency product.price
            end
            actions
          end
        end

        class ProductController < ApplicationController
          def create
            @product = Product.create(params[:id])
          end
        end

    Every time I create a product item in ActiveAdmin, the category comes up empty. It won't populate the category_id column for the product; it just leaves it empty. What am I doing wrong?

    Read the article

  • Monster's AI in an Action-RPG

    - by Andrea Tucci
    I'm developing an action RPG with some university colleagues. We've gotten to the monsters' AI design, and we would like to implement a sort of "utility-based AI", so we have a "thinker" that assigns a numeric value to each of the monster's possible decisions; we choose the highest-scoring one (or the most appropriate, depending on the monster's IQ) and add it to the monster's collection of decisions (like a goal-driven design pattern). One solution we found is to write a mathematical formula for each decision, with all the important parameters for evaluation (so for a spell decision we might have MP, distance from player, player's HP, etc.). This formula also has coefficients representing some of the monster's behaviour (this way we can alter formulas by changing coefficients). I've also read how "fuzzy logic" works; I was fascinated by it and by the many ways it can be extended. I was wondering how we could use this technique to simplify our AI, as in creating evaluations with fuzzy rules such as IF player_far AND mp_high AND hp_high THEN very_desirable (for a spell having a high casting time and consuming a lot of MP) and then 'defuzzifying' it. This way it's also simple to create a monster behaviour by creating ad-hoc rules for every monster IQ category. But is it correct to use fuzzy logic in a game with many parameters like an RPG? Is there a way of merging these two techniques? Are there better AI design techniques for evaluating a monster's choices?
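
    As a hedged sketch of the utility-scoring idea described above (every name, weight, and threshold here is a made-up illustration, not a known engine API), each decision exposes a score built from the relevant parameters, and the thinker picks the best:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class MonsterState
        {
            public int Mp, MaxMp;
            public float DistanceToPlayer;
        }

        interface IDecision
        {
            double Score(MonsterState s); // desirability in [0, 1]
            void Execute(MonsterState s);
        }

        class CastFireball : IDecision
        {
            public double Score(MonsterState s)
            {
                if (s.Mp < 20) return 0;                               // can't afford the spell
                double mp = s.Mp / (double)s.MaxMp;                    // prefer when MP is high
                double far = Math.Min(s.DistanceToPlayer / 10.0, 1.0); // prefer at range
                return 0.6 * far + 0.4 * mp; // coefficients act as tunable "behaviour" knobs
            }
            public void Execute(MonsterState s) { /* play animation, spend MP... */ }
        }

        class Thinker
        {
            // Pick the highest-scoring decision, as the post describes.
            public IDecision Choose(IEnumerable<IDecision> options, MonsterState s)
                => options.OrderByDescending(d => d.Score(s)).First();
        }

    A fuzzy rule like IF player_far AND mp_high THEN very_desirable can be folded into the same structure by having Score() combine membership values (e.g. min for AND) instead of a weighted sum, so the two techniques are not mutually exclusive.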

    Read the article

  • Caveats with the runAllManagedModulesForAllRequests in IIS 7/8

    - by Rick Strahl
    One of the nice enhancements in IIS 7 (and now 8) is the ability to intercept non-managed - i.e. non-ASP.NET-served - requests from within ASP.NET managed modules. This opened up a ton of new functionality that can be applied across non-managed content using .NET code. I thought I had a pretty good handle on how IIS 7's Integrated mode pipeline works, but when I put together some samples last night I realized that the way managed and unmanaged requests fire into the pipeline is downright confusing, especially when it comes to the runAllManagedModulesForAllRequests attribute. There are a number of settings that can affect whether a managed module receives non-ASP.NET content requests such as static files or requests from other frameworks like PHP or ASP classic, and this is the topic of this blog post.

    Native and Managed Modules

    The integrated mode IIS pipeline for IIS 7 and later - as the name suggests - allows for integration of ASP.NET pipeline events in the IIS request pipeline. Natively IIS runs unmanaged code, and there are a host of native-mode modules that handle the core behavior of IIS. If you set up a new IIS site or application without managed code support, only the native modules are supported and fired, without any interaction between native and managed code. If you use the Integrated pipeline with managed code enabled, however, things get a little more confusing, as both native modules and .NET managed modules can fire against the same IIS request. If you open up the IIS Modules dialog you see both managed and unmanaged modules. Unmanaged modules point at physical files on disk, while managed modules point at .NET types and files referenced from the GAC or the current project's BIN folder.

    Both native and managed modules can co-exist and execute side by side on the same request. When running in IIS 7, the IIS pipeline actually instantiates the ASP.NET runtime (via the System.Web.PipelineRuntime class), which, unlike the core HttpRuntime classes in ASP.NET, receives notification callbacks when IIS integrated mode events fire. The IIS pipeline is smart enough to detect whether managed handlers are attached, and if there are none these notifications don't fire, improving performance.

    The good news about all of this for .NET devs is that ASP.NET-style modules can be used for just about every kind of IIS request. All you need to do is create a new Web Application, enable ASP.NET on it, and then attach managed handlers. Handlers can look at ASP.NET content (i.e. ASPX pages, MVC, WebAPI etc. requests) as well as non-ASP.NET content, including static content like HTML files, images, JavaScript and CSS resources, etc. It's very cool that this capability has been surfaced. However, with that functionality comes a lot of responsibility. Because every request passes through the ASP.NET pipeline if managed modules (or handlers) are attached, there are possible performance implications that come with it. Running through the ASP.NET pipeline does add some overhead.
    ASP.NET and Your Own Modules

    When you create a new ASP.NET project, the Visual Studio templates typically create the modules section like this:

        <system.webServer>
          <validation validateIntegratedModeConfiguration="false" />
          <modules runAllManagedModulesForAllRequests="true">
          </modules>
        </system.webServer>

    Specifically, the interesting thing about this is the runAllManagedModulesForAllRequests="true" flag, which seems to indicate that it controls whether any registered modules always run, even when the value is set to false. Realistically, though, this flag does not control whether managed code is fired for all requests or not. Rather, it is an override for the preCondition flag on a particular handler. With the flag set to the default true setting, you can assume that pretty much every IIS request you receive ends up firing through your ASP.NET module pipeline, and every module you have configured is accessed even by non-managed requests like static files. In other words, your module will have to handle all requests.

    Now, so far so obvious. What's not quite so obvious is what happens when you set runAllManagedModulesForAllRequests="false". You probably would expect that the non-ASP.NET requests would immediately no longer get funnelled through the ASP.NET module pipeline. But that's not what actually happens. For example, if I create a module like this:

        <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" />

    by default it will fire against ALL requests regardless of the runAllManagedModulesForAllRequests flag. Even with runAllManagedModulesForAllRequests="false", the module is fired. Not quite expected.

    So what is runAllManagedModulesForAllRequests really good for? It's essentially an override for the managedHandler preCondition. If I declare my module in web.config like this:

        <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" preCondition="managedHandler" />

    and runAllManagedModulesForAllRequests="false", my module only fires against managed requests. If I switch the flag to true, now my module ends up handling all IIS requests that are passed through from IIS.

    The moral of the story here is that if you intend to only look at ASP.NET content, you should always set the preCondition="managedHandler" attribute to ensure that only managed requests are fired on this module. But even if you do this, realize that runAllManagedModulesForAllRequests="true" can override this setting.

    runAllManagedModulesForAllRequests and Http Application Events

    Another place the runAllManagedModulesForAllRequests attribute has an effect is the global HttpApplication object (typically in global.asax) and the Application_XXXX events that you can hook up there. While the events there are dynamically hooked up to the application class, they basically behave as if they were set with the preCondition="managedHandler" configuration switch. The end result is that if you have runAllManagedModulesForAllRequests="true" you'll see every HTTP request passed through the Application_XXXX events, and you only see ASP.NET requests with the flag set to "false".

    What does all that mean? Configuring an application to handle requests for both ASP.NET and other content can be tricky, especially if you need to mix modules that might require both. A couple of things are important to remember. If your module doesn't need to look at every request, by all means set preCondition="managedHandler" on it.
    This will at least allow it to respond to the runAllManagedModulesForAllRequests="false" flag and then only process ASP.NET requests. Look really carefully to see whether you actually need runAllManagedModulesForAllRequests="true" in your applications, as set by the default new-project templates in Visual Studio. Part of the reason this is the default is that it was required for the initial versions of IIS 7 and ASP.NET 2 in order to handle MVC extensionless URLs. However, if you are running IIS 7 or later and .NET 4.0, you can use the ExtensionlessUrlHandler instead, which gives you MVC functionality without requiring runAllManagedModulesForAllRequests="true":

        <handlers>
          <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
          <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*."
               verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS"
               type="System.Web.Handlers.TransferRequestHandler"
               preCondition="integratedMode,runtimeVersionv4.0" />
        </handlers>

    Oddly, this is the default for Visual Studio 2012 MVC template apps, so I'm not sure why the default template still adds runAllManagedModulesForAllRequests="true" - it should be enabled only if there's a specific need to access non-ASP.NET requests.

    As a side note, it's interesting that when you access a static HTML resource, you can actually write into the Response object and get the output to show, which is trippy. I haven't looked closely to see how this works - whether ASP.NET just fires directly into the native output stream, or whether the static requests are re-routed directly through the ASP.NET pipeline once a managed-code module is detected. This doesn't work for all non-ASP.NET resources - for example, I can't do the same with ASP classic requests - but it makes for an interesting demo when injecting HTML content into a static HTML page :-)

    Note that on the original Windows Server 2008 and Vista (IIS 7.0) you might need a hotfix in order for ExtensionlessUrlHandler to work properly for MVC projects. On my live server I needed it (about 6 months ago), but others have observed that the latest service updates have integrated this functionality and the hotfix is not required. On IIS 7.5 and later I've not needed any patches for things to just work.

    Plan for non-ASP.NET Requests

    It's important to remember that if you write a .NET module to run on IIS 7, there's no way for you to prevent non-ASP.NET requests from hitting your module. So make sure you plan to support requests to extensionless URLs and to static resources like files. Luckily, ASP.NET creates a full Request and a full Response object for you even for non-ASP.NET content. So even for static files, and even for ASP classic for example, you can look at Request.FilePath or Request.ContentType (in post-handler pipeline events) to determine what content you are dealing with.
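
    To make the preCondition discussion concrete, here is a minimal hedged sketch of a managed module and its registration; the module body is my own illustration (only the name HowAspNetWorks.SharewareMessageModule comes from the article), so treat it as a sketch rather than the article's actual implementation:

        // Minimal IHttpModule sketch (illustrative).
        using System;
        using System.Web;

        public class SharewareMessageModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.PostRequestHandlerExecute += (sender, e) =>
                {
                    var ctx = ((HttpApplication)sender).Context;
                    // Guard clause: exit fast when the request isn't one we care about.
                    if (!ctx.Request.FilePath.EndsWith(".aspx", StringComparison.OrdinalIgnoreCase))
                        return;
                    ctx.Response.Write("<!-- served by SharewareMessageModule -->");
                };
            }
            public void Dispose() { }
        }

    and the registration that keeps it scoped to managed requests:

        <system.webServer>
          <modules runAllManagedModulesForAllRequests="false">
            <!-- managedHandler precondition: fire only for ASP.NET requests -->
            <add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule"
                 preCondition="managedHandler" />
          </modules>
        </system.webServer>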
    As always with module design, make sure you check for the conditions in your code that make the module applicable, and if a filter fails, immediately exit - minimize the code that runs if your module doesn't need to process the request.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in IIS7   ASP.NET

    Read the article

  • JDBC Connection Pools in Glassfish

    - by Dana Singleterry
    I've been attempting to configure GlassFish 3.1.2.2 for ADF 11g, and the need arose to create a JDBC connection pool to my Oracle XE 11g database. While this is really very trivial, there were no samples of how to do this, and documentation, while good, rarely provides concrete examples. After fumbling around for a few minutes searching for an example, I gave up and figured it out on my own. Here are the steps for any of you that may be in need. This can be done either via the GlassFish command line tool asadmin or through the admin console; I'm doing this through the admin console.

    1. Start GlassFish and connect to the admin console with the credentials you defined at installation: http://localhost:4848
    2. Navigate to Resources | JDBC | JDBC Connection Pools and select New. Be sure to enter Resource Type and Datasource Classname under the General Settings tab. You can go with the defaults for Pool Settings, etc.
    3. Go to the Additional Properties tab and create username, password, and url properties with the respective values.
    4. Navigate to Resources | JDBC | JDBC Resources and select New. Be sure to enter the JNDI Name and select the Pool Name of the JDBC connection pool you created previously.
    5. Navigate to Configurations | server-config | JVM Settings and select the JVM Options tab. Add -Doracle.jdbc.J2EE13Compliant=true, which is used to make sure the driver behaves in a JEE-compliant manner.
    6. To integrate the JDBC driver into a GlassFish Server domain, copy the JAR files into the domain-dir/lib directory, then restart the server. The JAR file for the Oracle 11g database driver is ojdbc6dms.jar.

    An upcoming entry will demonstrate configuring GlassFish for Oracle ADF applications.
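
    For the asadmin route mentioned at the start, a hedged sketch of the equivalent commands follows; the pool name, JNDI name, and connection values are illustrative, so check `asadmin help` for your GlassFish version before relying on them:

        # Create the pool (note: colons inside property values must be escaped for asadmin)
        asadmin create-jdbc-connection-pool \
          --datasourceclassname oracle.jdbc.pool.OracleDataSource \
          --restype javax.sql.DataSource \
          --property "user=scott:password=tiger:url=jdbc\\:oracle\\:thin\\:@localhost\\:1521\\:XE" \
          OraclePool

        # Expose the pool under a JNDI name
        asadmin create-jdbc-resource --connectionpoolid OraclePool jdbc/OracleDS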

    Read the article

  • Recommended Approach to Secure your ADFdi Spreadsheets

    - by juan.ruiz
    ADF Desktop Integration leverages ADF Security to provide access to published spreadsheets within your application. In this article I discuss a good security practice for your existing spreadsheets, as well as for any new spreadsheets that you create. ADF Desktop Integration uses the adfdiRemoteServlet to process and send requests back and forth between the client and the ADF model, which lives in the Java EE container where your application is deployed. In other words, this is one of the entry points to the application server. Having said that, we need to make sure that container-based security is provided to avoid vulnerabilities.

    So what is needed? For existing as well as new ADFdi applications, you need to create a security constraint for the ADFdi servlet in the web.xml file of your application. Fortunately, JDeveloper 11g provides a nice visual editor to do this:

    1. Open the web.xml file and go to the Security category.
    2. Add a new Web Resource Collection, give it a meaningful name, and in the URL Pattern add /adfdiRemoteServlet.
    3. Click on the Authorization tab and make sure the valid-users role is selected for authorization.

    And voilà! Your application is now more secure.
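
    A hedged sketch of the resulting web.xml entries (the resource-collection name is illustrative; the URL pattern and role are from the steps above) would look something like:

        <security-constraint>
          <web-resource-collection>
            <web-resource-name>ADFdiRemoteServlet</web-resource-name>
            <url-pattern>/adfdiRemoteServlet</url-pattern>
          </web-resource-collection>
          <auth-constraint>
            <role-name>valid-users</role-name>
          </auth-constraint>
        </security-constraint>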

    Read the article

  • Gesture Detector not firing

    - by Tyler
    So I'm trying to create an input class that implements both InputProcessor and GestureListener in order to support both Android and desktop. The problem is that not all the methods are being called properly. Here is the input class definition and a couple of the methods:

    public class InputHandler implements GestureListener, InputProcessor {
        ...
        public InputHandler(OrthographicCamera camera, Map m, Player play, Vector2 maxPos) {
        ...

        @Override
        public boolean zoom(float originalDistance, float currentDistance) {
            //this.zoom = true;
            this.zoomRatio = originalDistance / currentDistance;
            cam.zoom = cam.zoom * zoomRatio;
            Gdx.app.log("GestureDetector", "Zoom - ratio: " + zoomRatio);
            return true;
        }

        @Override
        public boolean touchDown(int x, int y, int pointerNum, int button) {
            booleanConditions[TOUCH_EVENT] = true;
            this.inputButton = button;
            this.inputFingerNum = pointerNum;
            this.lastTouchEventLoc.set(x, y);
            this.currentCursorPos.set(x, y);
            if (pointerNum == 1) {
                //this.fingerOne = true;
                this.fOnePosition.set(x, y);
            } else if (pointerNum == 2) {
                //this.fingerTwo = true;
                this.fTwoPosition.set(x, y);
            }
            Gdx.app.log("GestureDetector", "touch down at " + x + ", " + y + ", pointer: " + pointerNum);
            return true;
        }

    The touchDown event always occurs, but I can never trigger zoom (or pan, among others...). The following is where I create and register the input handler in the "Game Screen":

    public class GameScreen implements Screen {
        ...
        this.inputHandler = new InputHandler(this.cam, this.map, this.player, this.map.maxCamPos);
        Gdx.input.setInputProcessor(this.inputHandler);

    Anyone have any ideas why zoom, pan, etc. are not triggering? Thanks!
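    A likely cause, though not confirmed in the question: in libGDX, GestureListener callbacks such as zoom and pan only fire when the listener is wrapped in a GestureDetector; registering the class directly with Gdx.input delivers only the raw InputProcessor events such as touchDown. A sketch of registering both at once, assuming the InputHandler class above:

        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.InputMultiplexer;
        import com.badlogic.gdx.input.GestureDetector;

        // In GameScreen, instead of registering the handler directly:
        InputMultiplexer multiplexer = new InputMultiplexer();
        // The GestureDetector synthesizes gesture callbacks (zoom, pan, fling...)
        // from raw touch events and forwards them to the wrapped GestureListener.
        multiplexer.addProcessor(new GestureDetector(this.inputHandler));
        // Keep the handler itself registered for the plain InputProcessor callbacks.
        multiplexer.addProcessor(this.inputHandler);
        Gdx.input.setInputProcessor(multiplexer);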

    Read the article

  • Unable to cast transparent proxy to type <type>

    - by Rick Strahl
    This is not the first time I've run into this wonderful error while creating new AppDomains in .NET and then trying to load types and access them across App Domains. In almost all cases I've run into this error, the problem comes from the two AppDomains involved loading different copies of the same type. Unless the types match exactly and come from exactly the same assembly, the typecast will fail. The most common scenario is that the types are loaded from different assemblies - as unlikely as that sounds.

    An Example of Failure

    To give some context, I'm working on some old code in Html Help Builder that creates a new AppDomain in order to parse assembly information for documentation purposes. I create a new AppDomain in order to load up an assembly, process it, and then immediately unload it along with the AppDomain. The AppDomain allows for unloading that otherwise wouldn't be possible, as well as isolating my code from the assembly being loaded. The process to accomplish this is fairly established and I use it for lots of applications that use add-in like functionality - basically anywhere where code needs to be isolated and have the ability to be unloaded. My pattern for this is:

    1. Create a new AppDomain
    2. Load a Factory Class into the AppDomain
    3. Use the Factory Class to load additional types from the remote domain

    Here's the relevant code from my TypeParserFactory that creates a domain and then loads a specific type - TypeParser - that is accessed cross-AppDomain in the parent domain:

    public class TypeParserFactory : System.MarshalByRefObject, IDisposable
    {
        ...
        /// <summary>
        /// TypeParser Factory method that loads the TypeParser
        /// object into a new AppDomain so it can be unloaded.
        /// Creates AppDomain and creates type.
        /// </summary>
        /// <returns></returns>
        public TypeParser CreateTypeParser()
        {
            if (!CreateAppDomain(null))
                return null;

            // Create the instance inside of the new AppDomain
            // Note: remote domain uses local EXE's AppBasePath!!!
            TypeParser parser = null;
            try
            {
                Assembly assembly = Assembly.GetExecutingAssembly();
                string assemblyPath = Assembly.GetExecutingAssembly().Location;
                parser = (TypeParser)this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                             typeof(TypeParser).FullName).Unwrap();
            }
            catch (Exception ex)
            {
                this.ErrorMessage = ex.GetBaseException().Message;
                return null;
            }
            return parser;
        }

        private bool CreateAppDomain(string lcAppDomain)
        {
            if (lcAppDomain == null)
                lcAppDomain = "wwReflection" + Guid.NewGuid().ToString().GetHashCode().ToString("x");

            AppDomainSetup setup = new AppDomainSetup();

            // *** Point at current directory
            setup.ApplicationBase = AppDomain.CurrentDomain.BaseDirectory;
            //setup.PrivateBinPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin");

            this.LocalAppDomain = AppDomain.CreateDomain(lcAppDomain, null, setup);

            // Need a custom resolver so we can load assembly from non current path
            AppDomain.CurrentDomain.AssemblyResolve += new ResolveEventHandler(CurrentDomain_AssemblyResolve);

            return true;
        }
        ...
    }

    Note that the classes must be either [Serializable] (by value) or inherit from MarshalByRefObject in order to be accessible remotely. Here I need to call methods on the remote object, so all classes are MarshalByRefObject. The specific problem code loads a new type from an assembly that is visible in both the current domain and the remote domain, and then instantiates a type from it.
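    The snippet above wires up CurrentDomain_AssemblyResolve but never shows the handler; a minimal sketch of what such a resolver typically looks like - the assemblyFolder field is a hypothetical stand-in for wherever the assembly actually lives, not the article's real code:

        private Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
        {
            // Only handle the one assembly we know lives outside the probing path.
            var requested = new AssemblyName(args.Name);
            if (requested.Name == "wwReflection20")
            {
                // assemblyFolder: hypothetical field holding the known load location
                string path = Path.Combine(assemblyFolder, "wwReflection20.dll");
                if (File.Exists(path))
                    return Assembly.LoadFrom(path);
            }
            return null; // defer to the default binder for everything else
        }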
This is the code in question:

    Assembly assembly = Assembly.GetExecutingAssembly();
    string assemblyPath = Assembly.GetExecutingAssembly().Location;
    parser = (TypeParser)this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                 typeof(TypeParser).FullName).Unwrap();

The last line of code is what blows up with the Unable to cast transparent proxy to type <type> error. Without the cast the code actually returns a TransparentProxy instance, but the cast is what blows up. In other words, I AM in fact getting a TypeParser instance back, but it can't be cast to the TypeParser type that is loaded in the current AppDomain.

Finding the Problem

To see what's going on I tried using the .NET 4.0 dynamic type on the result, and lo and behold it worked - the value returned is actually a TypeParser instance:

    Assembly assembly = Assembly.GetExecutingAssembly();
    string assemblyPath = Assembly.GetExecutingAssembly().Location;
    object objparser = this.LocalAppDomain.CreateInstanceFrom(assemblyPath,
                           typeof(TypeParser).FullName).Unwrap();

    // dynamic works
    dynamic dynParser = objparser;
    string info = dynParser.GetVersionInfo();  // method call works

    // casting fails
    parser = (TypeParser)objparser;

So clearly a TypeParser type is coming back, but nevertheless it's not the right one. Hmmm… mysterious. Another couple of tries reveal the problem, however:

    // works
    dynamic dynParser = objparser;
    string info = dynParser.GetVersionInfo();  // method call works

    // c:\wwapps\wwhelp\wwReflection20.dll (current execution folder)
    string info3 = typeof(TypeParser).Assembly.CodeBase;

    // c:\program files\vfp9\wwReflection20.dll (my COM client EXE's folder)
    string info4 = dynParser.GetType().Assembly.CodeBase;

    // fails
    parser = (TypeParser)objparser;

As you can see, the second value is coming from a totally different assembly. Note that this is even though I EXPLICITLY SPECIFIED an assembly path to load the assembly from! Instead .NET decided to load the assembly from the original ApplicationBase folder. Ouch! How I actually tracked this down was a little more tedious: I added a method like this to both the factory and the instance types and then compared notes:

    public string GetVersionInfo()
    {
        return ".NET Version: " + Environment.Version.ToString() + "\r\n" +
               "wwReflection Assembly: " + typeof(TypeParserFactory).Assembly.CodeBase
                   .Replace("file:///", "").Replace("/", "\\") + "\r\n" +
               "Assembly Cur Dir: " + Directory.GetCurrentDirectory() + "\r\n" +
               "ApplicationBase: " + AppDomain.CurrentDomain.SetupInformation.ApplicationBase + "\r\n" +
               "App Domain: " + AppDomain.CurrentDomain.FriendlyName + "\r\n";
    }

For the factory I got:

    .NET Version: 4.0.30319.239
    wwReflection Assembly: c:\wwapps\wwhelp\bin\wwreflection20.dll
    Assembly Cur Dir: c:\wwapps\wwhelp
    ApplicationBase: C:\Programs\vfp9\
    App Domain: wwReflection534cfa1f

For the instance type I got:

    .NET Version: 4.0.30319.239
    wwReflection Assembly: C:\Programs\vfp9\wwreflection20.dll
    Assembly Cur Dir: c:\wwapps\wwhelp
    ApplicationBase: C:\Programs\vfp9\
    App Domain: wwDotNetBridge_56006605

which clearly shows the problem. You can see that the two run in different AppDomains, and each is loading the assembly from a different location. Probably a better solution yet (for ANY kind of assembly loading problem) is to use the .NET Fusion Log Viewer to trace assembly loads. The Fusion viewer will show a load trace for each assembly loaded and where it's looking to find it.
Here's what the viewer looks like (screenshot not included in this excerpt). The last trace I found for the second wwReflection20 load (the one that is wonky) looks like this:

    *** Assembly Binder Log Entry (1/13/2012 @ 3:06:49 AM) ***

    The operation was successful.
    Bind result: hr = 0x0. The operation completed successfully.
    Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\V4.0.30319\clr.dll
    Running under executable c:\programs\vfp9\vfp9.exe
    --- A detailed error log follows.

    === Pre-bind state information ===
    LOG: User = Ras\ricks
    LOG: DisplayName = wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null (Fully-specified)
    LOG: Appbase = file:///C:/Programs/vfp9/
    LOG: Initial PrivatePath = NULL
    LOG: Dynamic Base = NULL
    LOG: Cache Base = NULL
    LOG: AppName = vfp9.exe
    Calling assembly : (Unknown).
    ===
    LOG: This bind starts in default load context.
    LOG: Using application configuration file: C:\Programs\vfp9\vfp9.exe.Config
    LOG: Using host configuration file:
    LOG: Using machine configuration file from C:\Windows\Microsoft.NET\Framework\V4.0.30319\config\machine.config.
    LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind).
    LOG: Attempting download of new URL file:///C:/Programs/vfp9/wwReflection20.DLL.
    LOG: Assembly download was successful. Attempting setup of file: C:\Programs\vfp9\wwReflection20.dll
    LOG: Entering run-from-source setup phase.
    LOG: Assembly Name is: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
    LOG: Binding succeeds. Returns assembly from C:\Programs\vfp9\wwReflection20.dll.
    LOG: Assembly is loaded in default load context.
    WRN: The same assembly was loaded into multiple contexts of an application domain:
    WRN: Context: Default | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
    WRN: Context: LoadFrom | Domain ID: 2 | Assembly Name: wwReflection20, Version=4.61.0.0, Culture=neutral, PublicKeyToken=null
    WRN: This might lead to runtime failures.
    WRN: It is recommended to inspect your application on whether this is intentional or not.
    WRN: See whitepaper http://go.microsoft.com/fwlink/?LinkId=109270 for more information and common solutions to this issue.

Notice that the Fusion log clearly shows that the .NET loader makes no attempt to even load the assembly from the path I explicitly specified.
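As an aside not taken from the article: on many machines the Fusion log stays empty until binder logging is enabled. A commonly used registry sketch for that - treat the exact value names as something to verify against current documentation before relying on them:

    reg add HKLM\SOFTWARE\Microsoft\Fusion /v ForceLog /t REG_DWORD /d 1
    reg add HKLM\SOFTWARE\Microsoft\Fusion /v LogPath /t REG_SZ /d "C:\FusionLogs\"

    :: Create the log folder first, and remember to remove ForceLog afterwards,
    :: since logging every bind slows the runtime down noticeably.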
Remember your Assembly Locations

As mentioned earlier, all failures I've seen like this ultimately resulted from different versions of the same type being available in the two AppDomains. At first sight that seems ridiculous - how could the types be different, and why would you have multiple assemblies - but there are actually a number of scenarios where it's quite possible to have multiple copies of the same assembly floating around in multiple places. If you're hosting different environments (like hosting the Razor Engine or the ASP.NET Runtime, for example) it's common to create a private BIN folder, and it's important to make sure that there's no overlap of assemblies. In my case of Html Help Builder the problem started because I'm using COM interop to access the .NET assembly and the above code. COM interop has very specific requirements on where assemblies can be found, and because I was mucking around with the loader code today, I ended up moving assemblies around to a new location for explicit loading. The explicit load works in the main AppDomain, but failed in the remote domain as I showed.

The solution here was simple enough: delete the extraneous assembly that was left around by accident. Not a common problem, but one that when it bites is pretty nasty to figure out, because it seems so unlikely that types wouldn't match. I know I've run into this a few times, and writing this down hopefully will make me remember in the future rather than poking around again for an hour trying to debug the issue as I did today. Hopefully it'll save some of you some time as well in the future.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in .NET  COM

    Read the article

  • How to structure an application that combines WCF and WPF

    - by CiaranG
    I'm in the process of learning how to use WCF (Windows Communication Foundation) to allow a client/server desktop application to communicate. The application's UI will be implemented using WPF, and we will probably use SQL Server for our database. What I'm struggling with is understanding how to structure such an application. From what I've read, there are three components of a WCF application (which in the examples I've seen have existed as separate projects):

    - A WCF service
    - A WCF service host
    - A WCF service client

    My question, then, is: should these projects solely implement the functionality of sending/receiving data between the client and server? Would it make better sense this way? Would it make sense to create a separate WPF (Windows Presentation Foundation) project to implement the UI for the application? And so, when I need to send/receive data, I could simply invoke the operations provided in the WCF projects that I have created? For anyone who has built similar applications previously, perhaps you could explain what worked best for you in terms of structuring your application?

    For example, say I create a user registration page: when the user clicks the 'Register' button, the client application will need to send the data to the server. In this case, could I just invoke the methods provided in the WCF projects to send the data? Also, what data structures worked best for you when sending/receiving data? My initial thought is sending/receiving XML containing the data. Is this an option that is easy to implement? I realise that answers to this question may well be a matter of opinion - unless there are specific best practices that I'm not aware of. Thank you
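    Not part of the original question, but a minimal contract-first sketch may clarify the usual layering; every name below (IRegistrationService, Register, registrationEndpoint) is invented for illustration:

        using System.ServiceModel;

        // Shared contract project - referenced by both the host and the client.
        [ServiceContract]
        public interface IRegistrationService
        {
            [OperationContract]
            bool Register(string userName, string email);
        }

        // Service project: the implementation the WCF host exposes.
        public class RegistrationService : IRegistrationService
        {
            public bool Register(string userName, string email)
            {
                // Validate and persist to SQL Server here.
                return true;
            }
        }

        // Client side, e.g. from a WPF view model's Register command:
        // var factory = new ChannelFactory<IRegistrationService>("registrationEndpoint");
        // IRegistrationService proxy = factory.CreateChannel();
        // bool ok = proxy.Register("jdoe", "jdoe@example.com");

    On the XML idea: WCF's DataContract serialization already handles the wire format, so hand-rolling XML payloads is usually unnecessary - defining [DataContract] types and letting WCF serialize them tends to be both easier and safer.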

    Read the article

< Previous Page | 480 481 482 483 484 485 486 487 488 489 490 491  | Next Page >