Search Results

Search found 23674 results on 947 pages for 'custom action'.


  • Bone creation in XNA Content Pipeline

    - by cod3monk3y
    I'm trying to manually create a ModelContent instance that includes custom bone data in a custom ContentProcessor in the XNA Content Pipeline. I can't seem to create or assign manually created bone data, blocked at every turn by either private constructors or read-only collections. The code I have right now, which creates a single-triangle ModelContent that I'd like to create a bone for, is:

        MeshContent mc = new MeshContent();
        mc.Positions.Add(new Vector3(-10, 0, 0));
        mc.Positions.Add(new Vector3(0, 10, 0));
        mc.Positions.Add(new Vector3(10, 0, 0));
        GeometryContent gc = new GeometryContent();
        gc.Indices.AddRange(new int[] { 0, 1, 2 });
        gc.Vertices.AddRange(new int[] { 0, 1, 2 });
        mc.Geometry.Add(gc);
        // Create normals
        MeshHelper.CalculateNormals(mc, true);
        // finally, convert it to a model
        ModelContent model = context.Convert<MeshContent, ModelContent>(mc, "ModelProcessor");

    The documentation on XNA is amazingly sparse. I've been referencing the class diagrams created by DigitalRune and Shawn Hargreaves' blog, but I haven't found anything on creating bone content. Once the ModelContent is created, it's not possible to add bones because the Bones collection is read-only, and it seems the only way to create the ModelContent instance is to call the standard ModelProcessor via ContentProcessorContext.Convert. So it's a bit of a catch-22. The BoneContent class has a constructor but no methods except those inherited from NodeContent... though now (true to form) maybe I've realized the solution by asking the question. Should I create a root NodeContent with two children, one MeshContent and one BoneContent as the root of my skeleton, then pass the root NodeContent to ContentProcessorContext.Convert? Off to try that now...
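    For what it's worth, a minimal sketch of that last idea (a hedged guess, not a confirmed fix; it continues from the snippet above, and the node names and identity bone transform are placeholder assumptions):

        // Build a scene graph: a root node holding both the mesh and a
        // one-bone skeleton, then let the stock ModelProcessor convert the tree.
        NodeContent root = new NodeContent { Name = "Root" };
        root.Children.Add(mc);                              // the MeshContent built above
        BoneContent skeleton = new BoneContent { Name = "RootBone" };
        skeleton.Transform = Matrix.Identity;
        root.Children.Add(skeleton);
        ModelContent model = context.Convert<NodeContent, ModelContent>(root, "ModelProcessor");

    If this works, the resulting ModelContent should expose the skeleton through its Bones collection without ever constructing ModelBoneContent directly.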

    Read the article

  • ASP.NET MVC Controllers & Actions In Regards To URLs And SEO

    - by user1066133
    The general idea is that if I were to create an MVC site, simple pages such as the contact and about pages would be placed under the Home controller, so my URLs would look like http://www.mysite.com/home/contact and http://www.mysite.com/home/about. The above works just fine, but I really don't like having the "home" portion in the URL. So what negatives would there be if I made Contact and About controllers, each with just a single Index action, so that the URLs simplify to http://www.mysite.com/contact and http://www.mysite.com/about? This looks cleaner. Do any of you do the same or something similar? I've been trying to work on SEO for an escort service site I've developed, and when you search for the females the link looks like http://www.mysite.com/escorts/female-escorts, and likewise for males. I'm wondering if I should remove the Escorts controller and just create a Female_Escorts controller with an Index action only, so it comes out like the above as http://www.mysite.com/female-escorts.
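    One common alternative to one-action controllers: keep the actions on HomeController and register custom routes so the "home" segment never appears in the URL. A hedged sketch (route and action names here are assumptions):

        // In Global.asax / RouteConfig: specific routes first, default route last,
        // otherwise the catch-all would handle /contact before the custom routes.
        routes.MapRoute("Contact", "contact",
            new { controller = "Home", action = "Contact" });
        routes.MapRoute("About", "about",
            new { controller = "Home", action = "About" });
        routes.MapRoute("Default", "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = UrlParameter.Optional });

    This yields http://www.mysite.com/contact and http://www.mysite.com/about without creating a controller per page; the same trick could give /female-escorts its own route while keeping a single Escorts controller.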

    Read the article

  • VSDB to SSDT part 3 : command-line deployment with SqlPackage.exe, replacement for Vsdbcmd.exe

    - by Etienne Giust
    For our continuous integration needs, we use a PowerShell script to handle deployment. A simpler approach would be to have a deployment task embedded within the build process; see the solution provided by Jakob Ehn (a most interesting read, which also dives into the "deploying from Visual Studio" specifics): http://geekswithblogs.net/jakob/archive/2012/04/25/deploying-ssdt-projects-with-tfs-build.aspx For our needs, though, clearly separating our build phase from our deployment phase is important. It allows us to instantly deploy old versions, and it is more convenient for continuous integration, so we stick with the PowerShell script approach. With VSDB projects, that script used to call the following command (the vsdbcmd executable was locally available, along with the needed libraries):

        vsdbcmd.exe /a:Deploy /dd /cs:<CONNECTIONSTRING TO TARGET DB> /dsp:SQL /manifest:<PATH TO .deploymanifest FILE>

    To do approximately the same thing with an SSDT-produced file (dacpac), you would call this command on a machine which has VS2012 installed (or the SSDT installed, see here: http://msdn.microsoft.com/en-us/library/hh500335%28v=vs.103%29):

        C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>

    And from within a PowerShell script:

        & "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>

    The command will consume a publish.xml file where the connection string and the deployment options are specified. You will be familiar with it if you have done some deployments from Visual Studio; if not, please refer to the above-mentioned article by Jakob Ehn. It is also possible to pass those parameters on the command line. The complete SqlPackage.exe syntax is detailed here: http://msdn.microsoft.com/en-us/library/hh550080%28v=vs.103%29.aspx
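    If the deployment step ever needs to run from managed code instead of PowerShell, a hedged C# sketch of shelling out to SqlPackage.exe (the file names passed as arguments are placeholders):

        using System;
        using System.Diagnostics;

        var psi = new ProcessStartInfo
        {
            FileName = @"C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe",
            Arguments = "/Action:Publish /SourceFile:\"Database.dacpac\" /Profile:\"Database.publish.xml\"", // placeholder names
            UseShellExecute = false,
            RedirectStandardOutput = true
        };
        using (Process proc = Process.Start(psi))
        {
            Console.WriteLine(proc.StandardOutput.ReadToEnd()); // publish log
            proc.WaitForExit();  // a non-zero exit code signals a failed publish
        }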

    Read the article

  • DC Comics Identifies Krypton on the Star Map

    - by Jason Fitzpatrick
    This week Action Comics Superman #14 hits the stands and DC Comics reveals the actual location of Krypton, delivered by none other than beloved astrophysicist Neil Tyson. Phil Plait at Bad Astronomy reports on the resolution of fans' long-standing curiosity about the location of Krypton: Well, that's about to change. DC Comics is releasing a new book this week – Action Comics Superman #14 – that finally reveals the answer to this stellar question. And they picked a special guest to reveal it: my old friend Neil Tyson. Actually, Neil did more than just appear in the comic: he was approached by DC to find a good star to fit the story. Red supergiants don't work; they explode as supernovae when they are too young to have an advanced civilization rise on any orbiting planets. Red giants aren't a great fit either; they can be old, but none is at the right distance to match the storyline. It would have to be a red dwarf: there are lots of them, they can be very old, and some are close enough to fit the plot. I won't keep you in suspense: the star is LHS 2520, a red dwarf in the southern constellation of Corvus (at the center of the picture here). It's an M3.5 dwarf, meaning it has about a quarter of the Sun's mass, a third its diameter, roughly half the Sun's temperature, and a luminosity of a mere 1% of our Sun's. It's only 27 light years away – very close on the scale of the galaxy – but such a dim bulb you need a telescope to see it at all (for any astronomers out there, the coordinates are RA: 12h 10m 5.77s, Dec: -15° 4m 17.9 s).

    Read the article

  • Suggestions for html tag info required for jQuery Plugin

    - by Toby Allen
    I have written a tiny bit of jQuery which simply selects all the Select form elements on the page and sets the selected property to the correct value. Previously I had to write code to generate the Select in PHP and specify the Selected attribute for the option that was selected, or do loads of if statements in my PHP page or Smarty template. Obviously the information about which option is selected still needs to be specified somewhere in the page so the jQuery code can select it. I decided to create a new attribute on the Select item:

        <Select name="MySelect" SelectedOption="2"> <!-- custom attribute SelectedOption -->
            <option value="1">My Option 1 </option>
            <option value="2">My Option 2 </option> <!-- this option will be selected when the jQuery code runs -->
            <option value="3">My Option 3 </option>
            <option value="4">My Option 4 </option>
        </Select>

    Does anyone see a problem with using a custom attribute to do this? Is there a more accepted way, or a better jQuery way?

    Read the article

  • Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    - by SQLOS Team
    This blog post comes from Khalid Mouss, Senior Program Manager in Microsoft SQL Server.
    Overview
    The goal of this blog is to demonstrate how we can automate, through PowerShell, connecting multiple SQL Server deployments in Windows Azure Virtual Machines. We configure a TCP port that we then open (and close) through the Windows firewall from a remote PowerShell session to the Virtual Machine (VM). This demonstrates how to take advantage of the remote PowerShell support in Windows Azure Virtual Machines to automate the steps required to connect to SQL Server in the same cloud service and in different cloud services.
    Scenario 1: VMs connected through the same Cloud Service
    Two virtual machines are configured in the same cloud service, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, so that PS and other commands can be run against them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premise machine(s). Note: RDP (Remote Desktop Protocol) is kept configured on both VMs by default so we can connect to them and check the connections to the SQL instances, for demo purposes only; it is not actually required.
    Step 1 – Provision VMs and configure ports
    Provision VM1 (DemoVM1), then provision VM2 (DemoVM2) with PowerShell Remoting enabled and connected to DemoVM1 (see the example screenshots in the original post if using the portal). After provisioning the two VMs, note the default port configuration.
    Step 2 – Verify/confirm the TCP port used by the Database Engine
    By default, the port will be 1433 – this can be changed to a different port number if desired.
    1. RDP to each of the VMs created above – this also ensures the VMs complete SysPrep and finish configuration.
    2. Go to SQL Server Configuration Manager -> SQL Server Network Configuration -> Protocols for <SQL instance> -> TCP/IP -> IP Addresses.
    3. Confirm the port number used by the SQL Server Engine; in this case 1433.
    4. Change the authentication from Windows Authentication to Mixed mode.
    5. Restart the SQL Server service for the change to take effect.
    6. Repeat steps 3, 4 and 5 for the second VM, DemoVM2.
    Step 3 – Remote PowerShell to DemoVM1
    Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <username> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)
    You will then be prompted to enter the password.
    Step 4 – Open port 1433 in the Windows firewall
    netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow
    Output: Ok.
    netsh advfirewall firewall show rule name=DemoVM1Port
    Rule Name:      DemoVM1Port
    Enabled:        Yes
    Direction:      In
    Profiles:       Domain,Private,Public
    Grouping:
    LocalIP:        Any
    RemoteIP:       Any
    Protocol:       TCP
    LocalPort:      1433
    RemotePort:     Any
    Edge traversal: No
    Action:         Allow
    Ok.
    Step 5 – Now connect from DemoVM2 to the DB instance in DemoVM1
    Step 6 – Close port 1433 in the Windows firewall
    netsh advfirewall firewall delete rule name=DemoVM1Port
    Output: Deleted 1 rule(s). Ok.
    netsh advfirewall firewall show rule name=DemoVM1Port
    No rules match the specified criteria.
    Step 7 – Try to connect from DemoVM2 to the DB instance in DemoVM1
    Because port 1433 has been closed (in step 6) in the Windows firewall on the VM1 machine, we can no longer connect from VM2 remotely to VM1.
    Scenario 2: VMs provisioned in different Cloud Services
    Two virtual machines are configured in different cloud services, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, as in Scenario 1. Note: as before, RDP is kept configured on both VMs for demo purposes only; it is not actually needed.
    Step 1 – Provision new VM3
    Provision VM3 (DemoVM3) as before (see the example screenshots in the original post if using the portal). After provisioning is complete, note the default port configuration.
    Step 2 – Add a public port to VM1 to connect to its DB instance from VM3
    Since VM3 and VM1 are not connected through the same cloud service, we will need to specify the full DNS address, which includes the public port, when connecting between the machines. We add a public port, 57000 in this case, linked to private port 1433, which will be used later to connect to the DB instance.
    Step 3 – Remote PowerShell to DemoVM1
    Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <UserName> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)
    You will then be prompted to enter the password.
    Step 4 – Open port 1433 in the Windows firewall
    netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow
    Output: Ok. (The rule listing from netsh advfirewall firewall show rule name=DemoVM1Port is the same as in Scenario 1.)
    Step 5 – Now connect from DemoVM3 to the DB instance in DemoVM1
    RDP into VM3, launch SSMS and connect to VM1's DB instance. You must specify the full server name using the DNS address and the public port number configured above.
    Step 6 – Close port 1433 in the Windows firewall
    netsh advfirewall firewall delete rule name=DemoVM1Port
    Output: Deleted 1 rule(s). Ok.
    netsh advfirewall firewall show rule name=DemoVM1Port
    No rules match the specified criteria.
    Step 7 – Try to connect from DemoVM3 to the DB instance in DemoVM1
    Because port 1433 has been closed (in step 6) in the Windows firewall on the VM1 machine, we can no longer connect from VM3 remotely to VM1.
    Conclusion
    Through the new support for remote PowerShell in Windows Azure Virtual Machines, one can script and automate many Virtual Machine and SQL management tasks. In this blog, we have demonstrated how to start a remote PowerShell session and re-configure the Virtual Machine firewall to allow (or disallow) SQL Server connections.
    References: SQL Server in Windows Azure Virtual Machines
    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Change Password vs. Reset Password – Week 42

    - by OWScott
    You can find this week's video here. The differences between changing a password and resetting a password are not well known. This week's video walks through the differences and shows them in action. Tune in to find out more about password management. It wasn't until fairly recently that I realized there is a difference between a change password and a reset password operation. One is safe, while the other is not so much. I remember when Windows Server 2003 was first released: resetting a user's password carried a distinct warning about irreversible loss of information. I wondered why it wasn't mentioned in previous operating systems, but I also wondered if it was true, since I never personally noticed any impact. It wasn't until about a year ago that I really dug in to understand this topic better. This week's lesson covers the differences between a change password and a reset password operation. In this video we also take a look at it in action so that we have a solid understanding of the topic, and briefly discuss how it works for programming APIs too. This is now week 42 of a 52 week series for the web pro. You can view past and future weeks here: http://dotnetslackers.com/projects/LearnIIS7/ You can find this week's video here.
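    On the API side, a hedged C# sketch of the distinction using System.DirectoryServices.AccountManagement (domain and account names are placeholders; null checks omitted for brevity):

        using System.DirectoryServices.AccountManagement;

        using (var ctx = new PrincipalContext(ContextType.Domain, "CONTOSO"))       // placeholder domain
        using (UserPrincipal user = UserPrincipal.FindByIdentity(ctx, "jsmith"))    // placeholder account
        {
            // Change: requires the old password and preserves access to data
            // protected with it (EFS files, stored credentials, and so on).
            user.ChangePassword("0ldP@ssw0rd", "N3wP@ssw0rd");

            // Reset: an administrative overwrite that needs no old password. This
            // is the operation that can cause the irreversible loss mentioned above.
            user.SetPassword("N3wP@ssw0rd");
        }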

    Read the article

  • How to display workflow-related tasks on the display page of the item the workflow is running on in SharePoint 2013

    - by ybbest
    In one of my projects, I needed to display workflow-related tasks on the display page of the item the workflow is currently running on. To achieve this, I'd like to add the tasks list view web part and use a connected web part (ID=workflowitemid). However, to make this work I need to unhide the WorkflowItemId field in the task list, as it is a hidden field and its CanToggleHidden property is set to false. I need to use reflection to change CanToggleHidden to true, as the API exposes only a getter, and then I am able to unhide the field. You can download the script here. However, it is not ideal to display tasks this way (it makes your environment unsupported by Microsoft). Another way to display the related tasks is to use a SharePoint Designer solution with a list view web part and a data source. Here are the steps:
    1. Create a new list display form.
    2. Edit the custom display form in advanced mode.
    3. Find the PlaceHolderMain content placeholder and insert the DataView, choosing the associated workflow tasks list.
    4. Go to List View Tools >> OPTIONS.
    5. Create a parameter called workflowitemId which retrieves its value from the ID query string.
    6. Create a filter based on UIVersion = workflowitemId; we will change UIVersion to the WorkflowItemId property later, as WorkflowItemId is a hidden field and cannot be selected from the wizard.
    7. Replace UIVersion with WorkflowItemId in the CAML for the XsltListViewWebPart.
    8. Go to the new custom display page at http://yourserver/Lists/aa/CustomDisplayPage.aspx?ID=414, and you will see the associated tasks showing in the page.
    References: http://office.microsoft.com/en-us/sharepoint-designer-help/watch-this-design-a-document-review-workflow-solution-HA010256417.aspx (videos 12 and 13)
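    For reference, a hedged C# version of the reflection step (this assumes SPField.CanToggleHidden has a non-public setter reachable via reflection; that assumption is mine, and as noted above the whole approach leaves the farm unsupported):

        using System.Reflection;
        using Microsoft.SharePoint;

        SPField field = taskList.Fields["WorkflowItemId"];   // taskList is a placeholder SPList reference
        PropertyInfo prop = field.GetType().GetProperty("CanToggleHidden",
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
        MethodInfo setter = prop.GetSetMethod(true);         // non-public setter, if one exists
        if (setter != null)
        {
            setter.Invoke(field, new object[] { true });
            field.Hidden = false;                            // now the field can be unhidden
            field.Update();
        }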

    Read the article

  • SQL SERVER – Get Free Books While Learning SQL Server 2012 Error Handling

    - by pinaldave
    Fans of this blog are aware that I have recently released my new books SQL Server Functions and SQL Server 2012 Queries. The books are available in the market in a limited edition, but you can get them for free on Wednesday, Nov 14, 2012. Not only are they free, but you can additionally learn SQL Server 2012 error handling as well. My book's co-author Rick Morelan is presenting a webinar tomorrow on SQL Server 2012 Error Handling. Here is the brief abstract of the webinar: People are often shocked when they see the demo in this talk where the first statement fails and all other statements still commit. For example, did you know that BEGIN TRAN…COMMIT TRAN is not enough to make everything work together? These mistakes can still happen to you in SQL Server 2012 if you are not aware of the options. Rick Morelan, creator of Joes2Pros, will teach you how to predict the Error Action and control it with and without structured error handling. Register for the webinar now to learn: how to predict the Error Action and control it; the nuances between successful and failing SQL statements; and essential SQL Server 2012 configuration options. Register for the webinar and be present during the session. My co-author will announce a winner (maybe more than one winner) during the session. If you are present during the session, you are eligible to win the book. The webinar is scheduled at 2 different times to accommodate various time zones: 1) 10am ET/7am PT and 2) 1pm ET/11am PT. Each webinar will have its own winner, so you can increase your chances by attending both webinars. Do not miss this opportunity; register for the webinar right now. The recordings of the webinar may not be available. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • How to Best Optimize Model Transforms and Import 3DS Animations Into XNA 4.0?

    - by Jason R. Mick
    Relative beginner to XNA, but trying to build a multi-purpose (3D) game framework in XNA 4. I've been using the Reed (O'Reilly) and Cawood/McGee (McGraw Hill) guides. My question is multi-faceted and involves how to most efficiently handle models. I'm using 3DS Max 2010 with kw-Xport to ship out my models as .X files. I solved an early problem by using my depth stencil state. My models are now loading properly (yay!) and I have basic bounding working; I just want to optimize transforming models and get animations working as a next step. My questions on models are: 1. Do you have any suggestions for good resources on exporting 3DS animations to XNA? I've seen some resources on how to handle animations in XNA, but most skimp on the basic topic of how to convert multi-animation 3DS files. For example, how do I take one long string of keyframed animations (say running, frames 5-20, climbing, frames 25-45, etc.) and turn them into named XNA animations? To my understanding every XNA animation has to have a name, but I haven't seen any tutorials on creating a new named animation from a subset of frames. 2. Is it faster to load a model once and animate/transform that base model on the fly at draw time, or to load multiple models? My game will have multiple enemies, and I've already seen some lagginess in XNA, so I want to make my code efficient... 3. I've heard people on App Hub talking about making custom content processors for models: what is the benefit of this? Does it speed up transforming or animating the models? If so, can you point me towards any good (model-centric) tutorials? (I've built a custom height map content processor to generate terrain, following Cawood's examples; I'm just a bit confused as to how a model content processor would be implemented.)
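    On question 2, a hedged sketch of the usual pattern (asset and type names are placeholders): load the Model once and share it across all enemies, varying only the world matrix at draw time, so the content sits in memory once no matter how many enemies are alive:

        // Loaded once, e.g. in LoadContent:
        Model enemyModel = Content.Load<Model>("Models/enemy");   // placeholder asset name

        // In Draw: one shared model, one world matrix per enemy.
        foreach (Enemy enemy in enemies)                          // Enemy and camera are hypothetical types
        {
            enemyModel.Draw(enemy.World, camera.View, camera.Projection);
        }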

    Read the article

  • How do I animate a spritesheet in cocos2d Android

    - by user729053
    I'm trying to animate a sprite from a spritesheet; the goal is to make the character walk from left to right. I subclassed CCSprite as CharacterSprite. This is my code:

        CCSpriteFrameCache.sharedSpriteFrameCache().addSpriteFrames("WalkAnimation.plist");
        CCSpriteSheet spriteSheet = CCSpriteSheet.spriteSheet("WalkAnimation.png");
        w.addChild(spriteSheet);
        ArrayList<CCSpriteFrame> walkSprites = new ArrayList<CCSpriteFrame>();
        for (int i = 1; i <= 8; ++i) {
            walkSprites.add(CCSpriteFrameCache.sharedSpriteFrameCache().spriteFrameByName("walk" + i + ".png"));
        }
        //float randomFactor = (float)Math.random()*10;
        CCAnimation walkAnimation = CCAnimation.animation("walk", 0.1f, walkSprites);
        Log.v("addEnemy", "Show");
        this.character = CCSprite.sprite(walkSprites.get(0));
        Log.v("addEnemy", "Don't Show");
        this.walkAction = CCRepeatForever.action(CCAnimate.action(walkAnimation, false));
        character.runAction(walkAction);
        for (int k = 0; k < amount; k++) {
            float randomFactor = (float)Math.random() * 10;
            character.setPosition(CGPoint.ccp(-randomFactor * character.getContentSize().width / 2, winSize.height / 6.0f));
            spriteSheet.addChild(character);
        }

    This code is in my 2nd scene, and whenever I press Start Game, nothing happens, although the code executes up to:

        character = CharacterSprite.sprite("walk1.png");

    Can someone provide me with a snippet of animation using a spritesheet?

    Read the article

  • Debugging .NET 2.0 assembly from unmanaged code in VS2010?

    - by Rick Strahl
    I've run into a serious snag trying to debug a .NET 2.0 assembly that is called from unmanaged code in Visual Studio 2010. I maintain a host of components that use COM interop and custom .NET runtime hosting, and ever since installing Visual Studio 2010 I've been utterly blocked by VS 2010's apparent inability to debug .NET 2.0 assemblies when launching through unmanaged code. Here's what I'm actually doing (a simplified scenario to demonstrate): I have a .NET 2.0 assembly that is compiled for COM interop. Compile the project with a .NET 2.0 target and register it for COM interop; set a breakpoint in the .NET component in one of the class methods; instantiate the .NET component via COM interop and call the method. The result is that the COM call works fine but the debugger never triggers on the breakpoint. If I now take that same assembly and target it at .NET 4.0, without any other changes, everything works as expected: the breakpoint set in the assembly project triggers just fine. The easy answer to this problem seems to be "just switch to .NET 4.0", but unfortunately the application, and the way the runtime is actually hosted, has a few complications. Specifically, the runtime hosting uses the .NET 2.0 hosting APIs, and apparently the only reliable way to host the .NET 4.0 runtime is to use the new hosting APIs that are provided only with .NET 4.0 (which all by itself is lame, lame, lame, as the promise of backwards compatibility is once again broken by .NET). So for the moment I need to continue using the .NET 2.0 hosting APIs due to application requirements. I've been searching high and low and experimenting back and forth, and posted a few questions on the MSDN forums, but haven't gotten any hints on what might be causing the apparent failure of Visual Studio 2010 to debug my .NET 2.0 assembly properly when called from unmanaged code. Incidentally, debugging .NET 2.0 targeted assemblies works fine when running with a managed startup application; it seems the issue is specific to unmanaged code starting up. My particular issue is with custom runtime hosting, which at first I thought was the problem, but the same issue manifests when using COM interop against a .NET 2.0 assembly, so the hosting is probably not the issue. Curious if anybody has any ideas on what could be causing the lack of debugging in this scenario? © Rick Strahl, West Wind Technologies, 2005-2010
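    For context, a hedged sketch of the kind of COM-visible .NET 2.0 class involved (class name, GUID and ProgId are placeholders, not the actual component), registered with regasm /codebase after building against the 2.0 target:

        using System;
        using System.Runtime.InteropServices;

        [ComVisible(true)]
        [Guid("A9B64E2D-5B8A-4F6B-9C21-0123456789AB")]   // placeholder GUID
        [ProgId("Demo.InteropComponent")]                // placeholder ProgId
        [ClassInterface(ClassInterfaceType.AutoDual)]
        public class InteropComponent
        {
            // A breakpoint here is what never triggers when the caller is unmanaged.
            public string Echo(string input)
            {
                return "Echo: " + input;
            }
        }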

    Read the article

  • Developing a long pannable, sprite-animated Windows Store app

    - by Groo
    I am creating my first Windows Store app in XAML, and I cannot seem to find a proper example for the requirements I have (I have spent a couple of days fiddling around, so I apologize if I missed something obvious). The basic idea of the app is to have a large scrollable canvas which would lazily start animating the visible parts of the view as soon as the user stops panning over certain content (with some audio played as well). My original idea was to use a StackPanel to add a bunch of custom controls, each of which would then animate itself once visible (with a short delay), but I have a couple of concerns: 1. If the entire canvas is ~50 screen widths wide, is it feasible to load all content at the beginning, or do I need to plan on some lazy loading during scrolling? For example, when I select a certain region in the Bing Travel app, it seems to lazily load tiles as I scroll towards the end. 2. Since the content is stretched 100% vertically, and these animations are vectorized to be resolution independent, I am not sure whether XAML (CompositionTarget) will be able to handle this, or whether I have to go for DirectX (MonoGame or C++) to get rid of flicker. Even better, is there an example for Windows 8 which uses a 100% vertically sized GridView with custom animated controls inside?
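    On the lazy-activation side, a hedged sketch of one approach (control and member names are assumptions): hook ScrollViewer.ViewChanged and only start animating the items that intersect the viewport once panning settles:

        // scroller is the ScrollViewer wrapping the wide horizontal panel;
        // AnimatedPanel, BeginAnimation and StopAnimation are hypothetical.
        scroller.ViewChanged += (s, e) =>
        {
            if (e.IsIntermediate) return;   // still panning; wait until the view settles

            double left = scroller.HorizontalOffset;
            double right = left + scroller.ViewportWidth;

            foreach (AnimatedPanel panel in panels)
            {
                if (panel.Left < right && panel.Right > left)
                    panel.BeginAnimation();   // start sprite animation (and audio)
                else
                    panel.StopAnimation();
            }
        };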

    Read the article

  • GWB | 30 in 60 Update – Enrique is almost there!

    - by Staff of Geeks
    We are very close to having our first blogger reach 30 posts: Enrique Lima. Stuart Brierley is over the hump with 16 posts, and Dave Campbell and Eric Nelson are definitely in the running. If you don't know what I am talking about, we are running a contest for our bloggers: anyone who blogs on Geekswithblogs and creates 30 posts from May 15th to July 13th will receive a custom Geekswithblogs.net t-shirt with their URL on the back. This could be their Geekswithblogs.net address or their custom domain. It is definitely not too late to get started, and with TechEd and WWDC right around the corner, there is a lot to talk about.
    Current Standings:
    Enrique Lima (28 posts) - http://geekswithblogs.net/enriquelima
    StuartBrierley (16 posts) - http://geekswithblogs.net/StuartBrierley
    Dave Campbell (12 posts) - http://geekswithblogs.net/WynApseTechnicalMusings
    Eric Nelson (10 posts) - http://geekswithblogs.net/iupdateable
    Christopher House (10 posts) - http://geekswithblogs.net/13DaysaWeek
    mbcrump (7 posts) - http://geekswithblogs.net/mbcrump
    Chris Williams (6 posts) - http://geekswithblogs.net/cwilliams
    Michael Stephenson (5 posts) - http://geekswithblogs.net/michaelstephenson
    Steve Michelotti (5 posts) - http://geekswithblogs.net/michelotti
    Liam McLennan (5 posts) - http://geekswithblogs.net/liammclennan
    Follow Us On Twitter: @StaffOfGeeks
    Technorati Tags: Geekswithblogs, 30 in 60, Standings

    Read the article

  • NetBeans Sources as a Platform

    - by Geertjan
    By default, when you create a new NetBeans module, the 'development IDE' is the platform to which you'll be deploying your module. It's a good idea to create your own platform and check that into your own repo, so that everyone working on your project will be able to work with a standardized platform, rather than whatever happens to be beneath the development IDE you're using. Something else you can do is use the NetBeans sources as your platform, once you've checked them out. That's something I did the other day when trying to see whether adding 'setActivatedNodes' to NbSheet was sufficient for getting UndoRedo enabled in the Properties Window. So that's a good use case, i.e., you'd like to change the NetBeans Platform somehow, or you're fixing a bug; in other words, in some way you need to change the NetBeans Platform sources and would then like to try out the result of your changes as a client of those changes. In that scenario, here's how to set up and use a NetBeans Platform built from the NetBeans sources. Run 'ant build platform' on the root of the NetBeans sources. You'll end up with nbbuild/netbeans containing a subfolder 'platform' and a subfolder 'harness'. There's your NetBeans Platform. Go to Tools | NetBeans Platform and browse to nbbuild/netbeans, registering it as your NetBeans Platform. Create a new NetBeans module, using the new NetBeans Platform as the platform. Now the cool thing is you can open any of the NetBeans Platform modules from the NetBeans sources. When you change the source code of one of these modules and then build that module, the changed JAR will automatically be added to the right place in the nbbuild/netbeans folder. And when you do a 'clean' on a NetBeans Platform module, the related JAR will be removed from nbbuild/netbeans. In other words, in this way, by changing the NetBeans sources, you're directly changing the platform that your custom module will be running on when you deploy it. That's pretty cool and gives you a more connected relationship with your platform, since you're able to change it in the same way as the custom modules you create.

    Read the article

  • Explain DNS/Content/Registration with services such as Blogger and Go Daddy

    - by user8926
    I have the settings below for Google Sites and Blogger in GoDaddy. I cannot get URL framing (not URL masking) working with them. I am unsure what the problem is, and cannot understand what services such as Blogger and GoDaddy really do. Wrong A-records in Go Daddy!

        ; A Records
        @ 3600 IN A 216.239.32.21
        art 3600 IN A 64.202.189.170
        abc 3600 IN A 64.202.189.170
        @ 3600 IN A 216.239.34.21
        @ 3600 IN A 216.239.36.21
        @ 3600 IN A 216.239.38.21
        lol 3600 IN A 64.202.189.170
        ; CNAME Records
        www 3600 IN CNAME ghs.google.com
        mobilemail 3600 IN CNAME mobilemail-v01.prod.mesa1.secureserver.net
        pda 3600 IN CNAME mobilemail-v01.prod.mesa1.secureserver.net
        email 3600 IN CNAME email.secureserver.net
        imap 3600 IN CNAME imap.secureserver.net
        mail 3600 IN CNAME pop.secureserver.net
        pop 3600 IN CNAME pop.secureserver.net
        smtp 3600 IN CNAME smtp.secureserver.net
        ftp 3600 IN CNAME @
        webmail 3600 IN CNAME webmail.secureserver.net
        e 3600 IN CNAME email.secureserver.net

    Please explain the "custom domain" and how I can hide my Blogger URL. OK, I am still unsure what the "custom domain" in Blogger really means. Does it mean that the content hosting is moved to some other site? Or does it mean that it just hides the blogspot URL behind another URL? Or is it this so-called "301" thing, or "URL redirection", or something else? Related questions: Control Content Hosting, DNS Hosting and Registration with command line?

    Read the article

  • In Joomla, in htaccess, REQUEST_URI always returns index.php instead of the SEF URL

    - by Saumil
    I have installed JoomSEF version 3.9.9 with Joomla 1.5.25. Now I want to force https for some sections of my site (e.g. URIs starting with /events/) while keeping all other URLs on http. I am setting rules in the .htaccess file but not getting the output I expect. I am checking REQUEST_URI for the SEF URLs but always getting index.php as the URI. Here is my htaccess code:

        ########## Begin - Custom redirects
        #
        # If you need to redirect some pages, or set a canonical non-www to
        # www redirect (or vice versa), place that code here. Ensure those
        # redirects use the correct RewriteRule syntax and the [R=301,L] flags.
        #
        ########## End - Custom redirects
        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root)
        # RewriteBase /
        ########## Begin - Joomla! core SEF Section
        #
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
        #
        # If the requested path and file is not /index.php and the request
        # has not already been internally rewritten to the index.php script
        RewriteCond %{REQUEST_URI} !^/index\.php
        # and the request is for root, or for an extensionless URL, or the
        # requested URL ends with one of the listed extensions
        RewriteCond %{REQUEST_URI} (/[^.]*|\.(php|html?|feed|pdf|raw))$ [NC]
        # and the requested path and file doesn't directly match a physical file
        RewriteCond %{REQUEST_FILENAME} !-f
        # and the requested path and file doesn't directly match a physical folder
        RewriteCond %{REQUEST_FILENAME} !-d
        # internally rewrite the request to the index.php script
        RewriteRule .* index.php [L]
        #
        ########## End - Joomla! core SEF Section

    Here is my code (e.g. the site URL is http://mydomain.com/events):

        RewriteCond %{REQUEST_URI} ^/(events)$
        RewriteCond %{HTTPS} !ON
        RewriteRule (.*) https://%{REQUEST_HOST}%{REQUEST_URI}/$1 [L,R=301]

    I am not getting why REQUEST_URI refers to index.php even though the URL in the address bar is http://mydomain.com/events. I am using JoomSEF (a Joomla extension for SEF URLs). If I remove the other rules from the htaccess file, Joomla stops working. I haven't found a way to handle this, as I am not an expert. Please let me know if someone has been through the same situation and has a solution or can suggest a workaround. Thanks

    Read the article

  • Microsoft Sync Framework

    - by kaleidoscope
    Introduction
    It is a platform that enables collaboration and offline access for applications, services and devices. Sync Framework features technologies and tools that enable roaming, data sharing and taking data offline. Moreover, developers can build synchronization ecosystems that integrate any application with data from any store, using any protocol over any network.
    Highlights
    * Add sync support to new and existing applications, services and devices
    * Enable collaboration and offline capabilities for any application
    * Roam and share information from any data store, over any protocol and over any network configuration
    * Leverage sync capabilities exposed in Microsoft technologies to create sync ecosystems
    * Extend the architecture to support custom data types including files
    Benefits of using Sync Framework
    * An extensible model that lets you integrate multiple data sources into a synchronization ecosystem.
    * A managed API for all components and a native API for select components.
    * Conflict handling for automatic and custom resolution schemes.
    * Filters that let you synchronize a subset of data, such as only those files that contain images.
    * A compact and efficient metadata model that enables synchronization for virtually any participant, without significant changes to the data store:
    - Any data store: add synchronization to a wide range of applications, services and devices.
    - Any data type: introduce new data types to synchronize.
    - Any protocol: use existing architectures and protocols to synchronize data. The transport-agnostic architecture allows integration of synchronization into a variety of protocols, including over-the-air and embedded devices.
    - Any network configuration: enable synchronization for your applications, devices and services in true peer-to-peer or hub-and-spoke configurations. Easily recover from network interruptions. Reduce network traffic by efficiently selecting changes to synchronize.
    Technorati Tags: Anish Sharma, Microsoft Sync Framework
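    As a small taste of the managed API, a hedged sketch of two-way file synchronization between two folders (paths are placeholders; this exercises only the file-sync corner of the framework):

        using System;
        using Microsoft.Synchronization;
        using Microsoft.Synchronization.Files;

        class FolderSync
        {
            static void Main()
            {
                SyncOrchestrator orchestrator = new SyncOrchestrator
                {
                    LocalProvider = new FileSyncProvider(@"C:\Data\Replica1"),   // placeholder path
                    RemoteProvider = new FileSyncProvider(@"C:\Data\Replica2"),  // placeholder path
                    Direction = SyncDirectionOrder.UploadAndDownload             // two-way sync
                };

                SyncOperationStatistics stats = orchestrator.Synchronize();
                Console.WriteLine("Uploaded {0} / downloaded {1} changes.",
                    stats.UploadChangesTotal, stats.DownloadChangesTotal);
            }
        }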

    Read the article

  • BizTalk Pipeline Component Error: "Object reference not set to an instance of an object"

    - by Stuart Brierley
    Yesterday I posted about my BizTalk Archiving Pipeline Component, which can be found on Codeplex if anyone is interested in taking a look. During testing of this component I began to encounter an error whereby the component would throw an "Object reference not set to an instance of an object" error when processing as part of a custom pipeline. This was occurring when the component was reading a ReadOnlySeekableStream so that the data could be archived to file, but the actual code throwing the error was somewhere in the depths of the Microsoft.BizTalk.Streaming stack. It turns out that there is a known issue where this exception can be thrown because the garbage collector has disposed of the stream before execution of the custom pipeline has completed. To get around this you need to add the streams in your code to the pipeline context's resource tracker. So a block of my code goes from:

        originalStrm = bodyPart.GetOriginalDataStream();
        if (!originalStrm.CanSeek)
        {
            ReadOnlySeekableStream seekableStream = new ReadOnlySeekableStream(originalStrm);
            inmsg.BodyPart.Data = seekableStream;
            originalStrm = inmsg.BodyPart.Data;
        }
        fileArchive = new FileStream(FullPath, FileMode.Create, FileAccess.Write);
        binWriter = new BinaryWriter(fileArchive);
        byte[] buffer = new byte[bufferSize];
        int sizeRead = 0;
        while ((sizeRead = originalStrm.Read(buffer, 0, bufferSize)) != 0)
        {
            binWriter.Write(buffer, 0, sizeRead);
        }

    to:

        originalStrm = bodyPart.GetOriginalDataStream();
        if (!originalStrm.CanSeek)
        {
            ReadOnlySeekableStream seekableStream = new ReadOnlySeekableStream(originalStrm);
            inmsg.BodyPart.Data = seekableStream;
            originalStrm = inmsg.BodyPart.Data;
        }
        // Register the stream with the pipeline context's resource tracker so the
        // garbage collector does not dispose of it before the pipeline completes.
        pc.ResourceTracker.AddResource(originalStrm);
        fileArchive = new FileStream(FullPath, FileMode.Create, FileAccess.Write);
        binWriter = new BinaryWriter(fileArchive);
        byte[] buffer = new byte[bufferSize];
        int sizeRead = 0;
        while ((sizeRead = originalStrm.Read(buffer, 0, bufferSize)) != 0)
        {
            binWriter.Write(buffer, 0, sizeRead);
        }

    So far this seems to have solved the issue: the error is no more, and my archive component is continuing its way through testing.

    Read the article

  • Getting Started with TypeScript – Classes, Static Types and Interfaces

    - by dwahlin
    I had the opportunity to speak on different JavaScript topics at DevConnections in Las Vegas this fall and heard a lot of interesting comments about JavaScript as I talked with people. The most frequent comment I heard from people was, “I guess it’s time to start learning JavaScript”. Yep – if you don’t already know JavaScript then it’s time to learn it. As HTML5 becomes more and more popular the amount of JavaScript code written will definitely increase. After all, many of the HTML5 features available in browsers have little to do with “tags” and more to do with JavaScript (web workers, web sockets, canvas, local storage, etc.). As the amount of JavaScript code being used in applications increases, it’s more important than ever to structure the code in a way that’s maintainable and easy to debug. While JavaScript patterns can certainly be used (check out my previous posts on the subject or my course on Pluralsight.com), several alternatives have come onto the scene such as CoffeeScript, Dart and TypeScript. In this post I’ll describe some of the features TypeScript offers and the benefits that they can potentially offer enterprise-scale JavaScript applications. It’s important to note that while TypeScript has several great features, it’s definitely not for everyone or every project especially given how new it is. The goal of this post isn’t to convince you to use TypeScript instead of standard JavaScript….I’m a big fan of JavaScript. Instead, I’ll present several TypeScript features and let you make the decision as to whether TypeScript is a good fit for your applications. TypeScript Overview Here’s the official definition of TypeScript from the http://typescriptlang.org site: “TypeScript is a language for application-scale JavaScript development. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. Any browser. Any host. Any OS. Open Source.” TypeScript was created by Anders Hejlsberg (the creator of the C# language) and his team at Microsoft. To sum it up, TypeScript is a new language that can be compiled to JavaScript much like alternatives such as CoffeeScript or Dart. It isn’t a stand-alone language that’s completely separate from JavaScript’s roots though. It’s a superset of JavaScript which means that standard JavaScript code can be placed in a TypeScript file (a file with a .ts extension) and used directly. That’s a very important point/feature of the language since it means you can use existing code and frameworks with TypeScript without having to do major code conversions to make it all work. Once a TypeScript file is saved it can be compiled to JavaScript using TypeScript’s tsc.exe compiler tool or by using a variety of editors/tools. TypeScript offers several key features. First, it provides built-in type support meaning that you define variables and function parameters as being “string”, “number”, “bool”, and more to avoid incorrect types being assigned to variables or passed to functions. Second, TypeScript provides a way to write modular code by directly supporting class and module definitions and it even provides support for custom interfaces that can be used to drive consistency. Finally, TypeScript integrates with several different tools such as Visual Studio, Sublime Text, Emacs, and Vi to provide syntax highlighting, code help, build support, and more depending on the editor. Find out more about editor support at http://www.typescriptlang.org/#Download. 
    TypeScript can also be used with existing JavaScript frameworks such as Node.js, jQuery, and others, and can even catch type issues and provide enhanced code help. Special "declaration" files that have a d.ts extension are available for Node.js, jQuery, and other libraries out-of-the-box. Visit http://typescript.codeplex.com/SourceControl/changeset/view/fe3bc0bfce1f#samples%2fjquery%2fjquery.d.ts for an example of a jQuery TypeScript declaration file that can be used with tools such as Visual Studio 2012 to provide additional code help and ensure that a string isn't passed to a parameter that expects a number. Although declaration files certainly aren't required, TypeScript's support for them makes it easier to catch issues upfront while working with existing libraries such as jQuery. In the future I expect TypeScript declaration files will be released for different HTML5 APIs such as canvas, local storage, and others, as well as some of the more popular JavaScript libraries and frameworks. Getting Started with TypeScript: To get started learning TypeScript, visit the TypeScript Playground available at http://www.typescriptlang.org. Using the playground editor you can experiment with TypeScript code, get code help as you type, and see the JavaScript that TypeScript generates once it's compiled. [Screenshot: the TypeScript Playground.] One of the first things that may stand out to you about such code is that classes can be defined in TypeScript. This makes it easy to group related variables and functions into a container, which helps tremendously with re-use and maintainability, especially in enterprise-scale JavaScript applications. While you can certainly simulate classes using JavaScript patterns (note that ECMAScript 6 will support classes directly), TypeScript makes it quite easy, especially if you come from an object-oriented programming background. An example of the Greeter class shown in the TypeScript Playground is shown next:

        class Greeter {
            greeting: string;
            constructor (message: string) {
                this.greeting = message;
            }
            greet() {
                return "Hello, " + this.greeting;
            }
        }

    Looking through the code you'll notice that static types can be defined on variables and parameters such as greeting: string, that constructors can be defined, and that functions can be defined such as greet(). The ability to define static types is a key feature of TypeScript (and where its name comes from) that can help identify bugs upfront, before even running the code. Many types are supported, including primitive types like string, number, bool, undefined, and null, as well as object literals and more complex types such as HTMLInputElement (for an <input> tag). Custom types can be defined as well. The JavaScript output by compiling the TypeScript Greeter class (using an editor like Visual Studio, Sublime Text, or the tsc.exe compiler) is shown next:

        var Greeter = (function () {
            function Greeter(message) {
                this.greeting = message;
            }
            Greeter.prototype.greet = function () {
                return "Hello, " + this.greeting;
            };
            return Greeter;
        })();

    Notice that the code is using JavaScript prototyping and closures to simulate a Greeter class in JavaScript. The body of the code is wrapped with a self-invoking function to take the variables and functions out of the global JavaScript scope. This is an important feature that helps avoid naming collisions between variables and functions.
    In cases where you'd like to wrap a class in a naming container (similar to a namespace in C# or a package in Java), you can use TypeScript's module keyword. The following code shows an example of wrapping an AcmeCorp module around the Greeter class. In order to create a new instance of Greeter, the module name must now be used. This can help avoid naming collisions that may occur with the Greeter class.

        module AcmeCorp {
            export class Greeter {
                greeting: string;
                constructor (message: string) {
                    this.greeting = message;
                }
                greet() {
                    return "Hello, " + this.greeting;
                }
            }
        }
        var greeter = new AcmeCorp.Greeter("world");

    In addition to being able to define custom classes and modules in TypeScript, you can also take advantage of inheritance by using TypeScript's extends keyword. The following code shows an example of using inheritance to define two report objects:

        class Report {
            name: string;
            constructor (name: string) {
                this.name = name;
            }
            print() {
                alert("Report: " + this.name);
            }
        }

        class FinanceReport extends Report {
            constructor (name: string) {
                super(name);
            }
            print() {
                alert("Finance Report: " + this.name);
            }
            getLineItems() {
                alert("5 line items");
            }
        }

        var report = new FinanceReport("Month's Sales");
        report.print();
        report.getLineItems();

    In this example a base Report class is defined that has a variable (name), a constructor that accepts a name parameter of type string, and a function named print(). The FinanceReport class inherits from Report by using TypeScript's extends keyword. As a result, it automatically has access to the print() function in the base class. In this example the FinanceReport overrides the base class's print() method and adds its own. The FinanceReport class also forwards the name value it receives in the constructor to the base class using the super() call. TypeScript also supports the creation of custom interfaces when you need to provide consistency across a set of objects. The following code shows an example of an interface named Thing (from the TypeScript samples) and a class named Plane that implements the interface to drive consistency across the app. Notice that the Plane class includes intersect and normal as a result of implementing the interface.

        interface Thing {
            intersect: (ray: Ray) => Intersection;
            normal: (pos: Vector) => Vector;
            surface: Surface;
        }

        class Plane implements Thing {
            normal: (pos: Vector) => Vector;
            intersect: (ray: Ray) => Intersection;
            constructor (norm: Vector, offset: number, public surface: Surface) {
                this.normal = function (pos: Vector) { return norm; }
                this.intersect = function (ray: Ray): Intersection {
                    var denom = Vector.dot(norm, ray.dir);
                    if (denom > 0) {
                        return null;
                    } else {
                        var dist = (Vector.dot(norm, ray.start) + offset) / (-denom);
                        return { thing: this, ray: ray, dist: dist };
                    }
                }
            }
        }

    At first glance it doesn't appear that the surface member is implemented in Plane, but it's actually included automatically due to the public surface: Surface parameter in the constructor. Adding public varName: Type to a constructor automatically adds a typed variable into the class without having to explicitly write the code, as with normal and intersect. TypeScript has additional language features, but defining static types and creating classes, modules, and interfaces are some of the key features it offers. So is TypeScript right for you and your applications? That's not a question that I or anyone else can answer for you. You'll need to give it a spin to see what you think.
    In future posts I'll discuss additional details about TypeScript and how it can be used with enterprise-scale JavaScript applications. In the meantime, I'm in the process of working with John Papa on a new TypeScript course for Pluralsight that we hope to have out in December of 2012.

    Read the article

  • Record management system Java web framework

    - by Kamil Tomšík
    We're currently reconsidering technologies and frameworks to get more agile with "simple" RMS CRUD-based projects; in short, short-lived things like this. Right now we have a custom extension on top of SmartGWT, but over time it has proven not to be flexible enough. I also personally dislike the Java-to-JS compilation process and the whole GWT codebase. Not only is the design ugly, it also makes certain low-level JS things very complicated, if not completely impossible. So what I'm looking for is:
    - as close to the web as possible, like JSF or possibly Tapestry; it is very important to be able to get "low" and weave the framework if necessary, which happens more often than we thought;
    - datagrid capable: Ext.js and PrimeFaces look pretty good, and Vaadin does too;
    - db-schema generators (optional, in either direction).
    If it were only up to me, I'd probably stick with Ext.js plus a custom REST-based Java solution, possibly generated from a database schema (not sure about concrete tooling yet). I only have experience with vanilla Ext.js, vanilla GWT and JSF 2.0 / Seam, so it is hard for me to judge or even propose other frameworks. What would be your proposition? What problems have you faced? What was your solution, and how hard do you think it was to deal with them in the "big picture"?

    Read the article

  • CodePlex Daily Summary for Friday, August 01, 2014

    Popular Releases:
    QuickMon: Version 3.20: Added a 'Directory Services Query' collector agent. This collector allows for querying Active Directory using simple DirectorySearcher queries.
    Grunndatakvalitet: Initial working: Show Altinn metadata in Excel. To get a live list you need to run the SQL script on a server and update the connection string in Excel.
    Multiple Threads TCP Server: Project: this project is based on VS 2013 and .NET Framework 4.0; you can open it with VS 2010 or later.
    Essence#: Philemon-2 (Alpha Build 18): The Philemon-2 release fixes bugs, most of which have to do with the binding of .Net assemblies. Configuration profiles have been added in order to help solve some of the .Net assembly binding issues. A configuration profile is persistently stored as a folder in the filesystem. It must be located in the folder %EssenceSharpPath%\Config. The folder name must use ".profile" as its file extension. A configuration profile folder may (optionally) contain one or more files--each with its own form...
    DataBind: DataBind 1.0: - Efficiency improved.
    Console Progress Bar: ConsoleProgressBar-0.3: - Validate ProgressInPercentage.
    Space Engineers Server Manager: SESM V1.10: New auto update system! You won't miss any SE update now ^^ To learn more: https://sesm.codeplex.com/wikipage?title=Activating%20the%20auto%20update%20system - Moved the server-wide configuration options to a file other than web.config. It should solve the "my parameters are erased by each SESM update" issue! Warning! If you upgrade from any previous version, you will need to copy-paste your parameters from your old web.config into the new SESM.config (this file is generated by acc...
    Accesorios de sitios Torrent en Español para Synology Download Station: Pack de Torrents en Español 6.0.0: Added the DivXTotal modules; the search module depends on the hosting module to download series. Uses the RSS feed: http://www.divxtotal.com/rss.php
    DbEntry.Net (Leafing Framework): DbEntry.Net 4.2: DbEntry.Net is a lightweight Object Relational Mapping (ORM) database access component for .Net 4.0+. It has a clear and easy programming interface for ORM and direct SQL, and supports Access, Sql Server, MySql, SQLite, Firebird, PostgreSQL and Oracle. It also provides a Ruby on Rails style MVC framework, an Asp.Net DataSource and a simple IoC. DbEntry.Net.v4.2.Setup.zip includes the setup package. DbEntry.Net.v4.2.Src.zip includes source files and unit tests. DbEntry.Net.v4.2.Samples.zip ...
    Recaptcha for .NET: Recaptcha for .NET v1.5: What's New: minor bug fixes; support for legacy .NET Framework 4.0 and ASP.NET MVC 4; support for .NET Framework 4.5.1.
    Azure Storage Explorer: Azure Storage Explorer 6 Preview 1: Welcome to Azure Storage Explorer 6 Preview 1. This is the first release of the latest Azure Storage Explorer, code-named Phoenix. What's new? Here are some important things to know about version 6: Open source - now being run as a full open source project, with full source code on CodePlex; collaboration encouraged! Updated code base - brand-new code base (WPF/C#/.NET 4.5), Visual Studio 2013 solution (previously VS2010), uses the Task Parallel Library (TPL) for asynchronous background operat...
    Wsus Package Publisher: release v1.3.1407.29: Updated WPP to recognize the very latest console version. Some files were missing from the latest release of WPP, which led to a crash when trying to make a custom update. Added a workaround to avoid clipboard modification when double-clicking on a label when creating a custom update. Added the ability to publish detectoids. (This feature is still in a BETA phase. Packages relying on these detectoids to determine which computers need to be updated may apply to all computers.)
    SSIS Connector for Sharepoint Online: Sharepoint_Online_ConnectorsV2.3.5: V2.3.5: Removed user-name validation where it was accepting user names with only ".com". V2.3.1: Added support for columns with string array type (for source only). V2.3: Added support for the delete operation; just map the ID columns and choose operation type "Delete". Any other operation type will be taken as Insert\Update. MultiUser type will be taken as a User field and a single user will be added; still figuring out how to add a multi-user field (whatever is available on the internet is not helping :( ). V2.2 ...
    Artezio SharePoint 2013 Workflow Activities for VS & Actions for Designer: SharePoint Designer 2013 custom workflow actions 1.0: SharePoint Designer 2013 custom workflow actions that work with permissions: Add Role Assignment, Add Role Assignments, Delete Role Assignments, Get Role Definition Id, Get Role Definition Id By Role Type Id, Reset Role Inheritance. Artezio SharePoint Consulting & Development
    VG-Ripper & PG-Ripper: PG-Ripper 1.4.32: Changes: NEW: Added support for 'ImgMega.com' links. NEW: Added support for 'ImgCandy.net' links. NEW: Added support for 'ImgPit.com' links. NEW: Added support for 'Img.yt' links. FIXED: 'Radikal.ru' links. FIXED: 'ImageTeam.org' links. FIXED: 'ImgSee.com' links. FIXED: 'Img.yt' links.
    Asp.Net MVC-4, Entity Framework and JQGrid Demo with Todo List WebApplication: Asp.Net MVC-4, Entity Framework and JQGrid Demo: a demo with a simple Todo List web application. Overview: TodoList is a simple web application to create, store and modify Todo tasks to be maintained by the users, comprising the following fields: Task Name, Task Description, Severity, Target Date, Task Status. The TodoList web application is created using the MVC-4 architecture, code-first Entity Framework (ORM) and jqGrid for displaying the data.
    Waterfox: Waterfox 31.0 Portable: New features in Waterfox 31.0: added support for Unicode 7.0; experimental support for WebCL. New features in Firefox 31.0: New: added the search field to the new tab page; support of the Prefer:Safe http header for parental control; mozilla::pkix as the default certificate verifier; block malware in downloaded files; audio/video .ogg and .pdf files handled by Firefox if no application is specified. Changed: removal of the CAPS infrastructure for specifying site-sp...
    SuperSocket, an extensible socket server framework: SuperSocket 1.6.3: The changes below are included in this release: fixed an exception when collecting a server's status after it has been stopped; fixed a bug that could cause an exception when sending data after the connection had already dropped; fixed the missing log4net issue for a QuickStart project; fixed a warning in a QuickStart project.
    Ynote Classic: Ynote Classic 2.8.5 Beta: Several changes - multiple carets and multiple selections - improved startup time - improved syntax highlighting - search improvements - shell command - improved stability.
    TEBookConverter: 1.2: Fixed: Could not start conversion in some cases. Fixed: Progress shown during conversion was truncated. Fixed: Stopping a conversion didn't reset the program title.
    New Projects:
    .NET to C/C++ Interop Sample: Reference projects that demonstrate how to invoke C/C++ code from .NET using C#.
    Aircraft Guidance Systems: Software for RC model control and navigation.
    DNN Upgrade Wizard: Provides a simple method for upgrading a DNN site.
    Drop and Create indexes SSIS Task: This is a simple SSIS Control Flow Task which can be used to drop and re-create all non-primary-key indexes associated with a specified table.
    Dynamics CRM 2013 Global Advanced Find: Dynamics CRM 2013 Global Advanced Find adds a button to the global navigation allowing users to launch an advanced find window from anywhere in the system.
    itsanewproject: sdlfjsldjf
    JNMSJW: haha
    Omni Kits: A library intended to help in projects for any purpose while keeping small.
    Phase1Week1: Learning how to post, publish, and patch my commits to CodePlex with homework using Bootstrap and Font Awesome.
    PL_Last1: Pll1
    Tabletoparmy: Tabletoparmy is a tool for managing tabletop miniatures. Currently the only option is managing an X-Wing tournament.
    TalesAndTiles: Tile-based game developed with the Microsoft XNA Framework 4.0.
TodoList web application is created using MVC - 4 architecture, code-first Entity Framework (ORM) and Jqgrid for displaying the data.Waterfox: Waterfox 31.0 Portable: New features in Waterfox 31.0: Added support for Unicode 7.0 Experimental support for WebCL New features in Firefox 31.0:New Add the search field to the new tab page Support of Prefer:Safe http header for parental control mozilla::pkix as default certificate verifier Block malware from downloaded files Block malware from downloaded files audio/video .ogg and .pdf files handled by Firefox if no application specified Changed Removal of the CAPS infrastructure for specifying site-sp...SuperSocket, an extensible socket server framework: SuperSocket 1.6.3: The changes below are included in this release: fixed an exception when collect a server's status but it has been stopped fixed a bug that can cause an exception in case of sending data when the connection dropped already fixed the log4net missing issue for a QuickStart project fixed a warning in a QuickStart projectYnote Classic: Ynote Classic 2.8.5 Beta: Several Changes - Multiple Carets and Multiple Selections - Improved Startup Time - Improved Syntax Highlighting - Search Improvements - Shell Command - Improved StabilityTEBookConverter: 1.2: Fixed: Could not start convertion in some cases Fixed: Progress show during convertion was truncated Fixed: Stopping convertion didn't reset program titleNew Projects.NET to C/C++ Interop Sample: Reference projects that demonstrate how to invoke C/C++ code from .NET using C#.Aircraft Guidance Systems: Software for RC model control and navigationDNN Upgrade Wizard: Provides a simple method for upgrading a DNN siteDrop and Create indexes SSIS Task: This is a simple SSIS Control Flow Task which can be used to drop and re-create all non-primary key indexes associated with specified table.Dynamics CRM 2013 Global Advanced Find: Dynamics CRM 2013 Global Advanced Find adds a button to the global navigation allowing users to launch an advanced find window from anywhere in the system.itsanewproject: sdlfjsldjfJNMSJW: hahaOmni Kits: A library tend to help in projects for any purpose while keeping small.Phase1Week1: Learning how to post, publish, and patch my commits to CodePlex with homework using bootsrap and font awesome.PL_Last1: Pll1Tabletoparmy: Tabletoparmy ist ein Tool um Tabletop Miniaturen zu Verwalten. Aktuell gibt es nur die Möglichkeit ein X-Wing Turnier zu verwalten.TalesAndTiles: Tile based game developed with Microsoft XNA Framework 4.0??????: ?????????

    Read the article

  • Clientside anticheating in multiplayer game 1vs1

    - by garnav
    I'm developing a simple card game with a matchmaking system that pits you against another human player. This will be the only game mode available: 1vs1 against another human, no AI. I want to prevent cheating as much as possible. I have already read a lot of similar questions here, so I know that I cannot trust the client and that I have to verify everything server side. I intend to have a server (I need one for matchmaking anyway) and to run some verifications server side. But checking everything server side would require my server to track the state of all current games and validate every action, and I don't have the money or infrastructure to support that server. My idea is to have clients check and verify some of the actions made by their opponent*, and if they find an illegal action, report the possible cheating to the server so that the server can verify it. The server would still have to track all current games, but it would save resources by only checking things that cannot be checked client side (like the card order in the deck) and only checking other things when they are actually flagged as wrong. *(Only checks that don't themselves enable cheating! For example, a client can't check whether a played card was in the opponent's hand, because that would require knowing all the cards in that hand.) Summing up, my questions are: Is this a viable approach? Will I actually save resources, or is the extra complexity of exchanging these messages between server and client not worth it? Do you know of any game that has tried a similar approach, successfully or unsuccessfully? Thanks all for reading and answering. A minimal sketch in C# of the report-then-verify flow is shown below.
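
    Here is that sketch. All names in it (CheatReport, IRulesEngine, and so on) are hypothetical placeholders, not an established networking API; the point is only the cost split: the server appends actions cheaply while the match runs and pays the full replay cost only when a client files a report.

    using System;
    using System.Collections.Generic;

    // Hypothetical wire message: a client flags an opponent action it believes is illegal.
    public class CheatReport
    {
        public Guid GameId;
        public int TurnNumber; // index into the server's action log for this game
    }

    // Hypothetical rules engine; a real one would implement the card game's rules.
    public interface IRulesEngine
    {
        void Apply(string action);   // advance the authoritative state by one action
        bool IsLegal(string action); // is this action legal in the current state?
    }

    public class GameServer
    {
        // Cheap path: per game, append actions as they arrive, without validating them.
        private readonly Dictionary<Guid, List<string>> actionLogs =
            new Dictionary<Guid, List<string>>();
        private readonly Func<IRulesEngine> engineFactory;

        public GameServer(Func<IRulesEngine> engineFactory)
        {
            this.engineFactory = engineFactory;
        }

        public void OnActionReceived(Guid gameId, string action)
        {
            if (!actionLogs.TryGetValue(gameId, out var log))
                actionLogs[gameId] = log = new List<string>();
            log.Add(action);
        }

        // Expensive path, run only when a client files a report: replay the log up
        // to the flagged turn, then check that turn against the real game rules.
        public bool VerifyReport(CheatReport report)
        {
            if (!actionLogs.TryGetValue(report.GameId, out var log) ||
                report.TurnNumber < 0 || report.TurnNumber >= log.Count)
                return false; // match already over, or a bogus report

            var engine = engineFactory();
            for (int i = 0; i < report.TurnNumber; i++)
                engine.Apply(log[i]);

            return !engine.IsLegal(log[report.TurnNumber]); // true => cheat confirmed
        }
    }

    The trade-off the question asks about shows up directly: OnActionReceived is O(1) per action, while the O(n) replay in VerifyReport only runs on the (hopefully rare) reports.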

    Read the article

  • ArchBeat Link-o-Rama for 11/22/2011

    - by Bob Rhubart
    A Brief Introduction on Migrating to an Oracle-based Cloud Environment | Tom Laszewski: "Before you can start migrating to the cloud, you must define what the cloud means to you," says Tom Laszewski. "The cloud is not a specific software or hardware product, contrary to what many technology vendors would have you believe."
    Custom Exception Registration for ADF BC EO Attribute | Andrejus Baranovskis: "Sometimes customers prefer to implement business logic validation completely in Java, without using ADF BC declarative/Groovy validation rules," says Oracle ACE Director Andrejus Baranovskis. "That's fine; we can code business logic validation in ADF and implement different custom validation methods at the VO/EO level."
    Oracle Exadata Virtual Conference - Jan 20, 2012: The Exadata SIG, along with IOUG, is organizing the first Exadata Virtual Conference, to be held on January 20, 2012. Proposals for presentations are now being accepted.
    Smooth Sailing or Rough Waters: Navigating Policy Administration Modernization | Helen Pitts: "It's no surprise that fueling growth, both now and in the future, continues to be a key driver for modernization," says Helen Pitts. "Why? Inflexible, hard-coded legacy systems require customization by IT every time a change is required."
    Architects putting on the Ritz; Info integration book learning; Platform for SAS Grid Computing: this week on the Architect Home Page on OTN.
    Webcast: Introducing Oracle WebLogic Server 12c: Developer Deep Dive - Dec 1 - 11am PT / 2pm ET. Learn how Oracle WebLogic Server 12c enables rapid development of modern, lightweight Java EE 6 applications. Discover how you can leverage the latest development technologies, tools and standards when deploying to Oracle WebLogic Server across both conventional and cloud environments.
    Architecture all day: Oracle Technology Network Architect Day - Phoenix, AZ - Dec 14. Free registration. When: December 14, 2011. Where: The Ritz-Carlton, Phoenix, 2401 East Camelback Road, Phoenix, AZ 85016. Registration is free, but seating is limited.

    Read the article

  • XNA 4.0: Problem with loading XML content files

    - by 200
    I'm new to XNA, so hopefully my question isn't too silly. I'm trying to load content data from XML. My XML file is placed in my Content project (the new XNA 4 arrangement) and is called "Event1.xml": <?xml version="1.0" encoding="utf-8" ?> <XnaContent> <Asset Type="MyNamespace.Event"> // error on this line <name>1</name> </Asset> </XnaContent> My "Event" class is placed in my main project: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Linq; namespace MyNamespace { public class Event { public string name; } } The XML file is loaded by my main game class inside the LoadContent() method: Event temp = Content.Load<Event>("Event1"); And this is the error I'm getting: There was an error while deserializing intermediate XML. Cannot find type "MyNamespace.Event" I think the problem is that, at the time the XML is evaluated, the Event class is not visible, because one file is in my main project while the other is in my Content project (this being an XNA 4.0 project and all). I have also tried changing the build action of the XML file from Compile to Content; however, the program then could not find the file and gave this other warning: Project item 'Event1.xml' was not built with the XNA Framework Content Pipeline. Set its Build Action property to Compile to build it. Any suggestions are appreciated. Thanks.
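
    For reference, a sketch of the layout commonly suggested for this error (an assumption about the fix, not something confirmed in this excerpt): move the Event class into a separate Game Library project that both the main game project and the Content project reference, so the content pipeline can resolve MyNamespace.Event while building the XML. The library name below is illustrative.

    // MyGameLibrary/Event.cs -- a new Game Library project ("MyGameLibrary" is a
    // hypothetical name). Both the main game project and the Content project add
    // a reference to this library, so the type is visible on both sides of the
    // build: to the content pipeline at build time and to the game at runtime.
    using System;

    namespace MyNamespace
    {
        public class Event
        {
            public string name;
        }
    }

    With that reference in place, the XML keeps its Compile build action, and Content.Load<Event>("Event1") should then be able to find the type.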

    Read the article

< Previous Page | 292 293 294 295 296 297 298 299 300 301 302 303  | Next Page >