Search Results

Search found 35244 results on 1410 pages for 'version numbers'.


  • Integrating different branches from external sources into a single Mercurial repository

    - by dukeofgaming
    I'm currently working at a company that uses Perforce, and am paving the way for distributed version control with Mercurial. I've had success importing Perforce history using the perfarce extension (quite a suitable name; I laugh every time I see/say it); however, it only works with a single branch at a time. Here's how my P4 integration setup works:

    1. In Perforce, create a "client", which is essentially a description of what you will be constantly updating/checking out. A client can only address one branch at a time (trunk or other).
    2. Run hg clone p4://<server>/<client_name>
    3. Go to .hg/hgrc and add the Perforce path line: perforce = p4://<server>/<client_name>
    4. Work normally with the code under Mercurial; run hg pull perforce to sync up, and hg push to export a changelist.

    What I'd like is to have one Perforce path per branch and have everything work in the same repository. Pushing is not a problem; however, if I pull the history from another branch it ends up on the default branch. I'd like to be able to do something like hg pull perforce-R5 and have it land in Mercurial's R5 branch. Even without merge history, just preserving the branches would be sweet enough. There are also other plugins that integrate Mercurial with centralized VCSs, but AFAIK the Subversion one has the same problem. I don't think there is a straightforward way of doing this, but as long as I could automate the process with some hooks and scripts on a single Mercurial machine, that would be good enough.
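    For the per-branch pulls described above, a minimal sketch of what the .hg/hgrc [paths] section could look like, assuming one Perforce client has been created per branch (the server and client names here are hypothetical):

        [paths]
        perforce = p4://p4server:1666/trunk-client
        perforce-R5 = p4://p4server:1666/r5-client

    Named paths like these at least make hg pull perforce-R5 address the right client; landing the pulled changesets on a named Mercurial branch rather than default would still need a hook or wrapper script, as suspected above.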

    Read the article

  • USB Mouse and Keyboard not working in Linux 4 Tegra

    - by Sijo
    I am new to Tegra Linux development. I have a Tamontem NG evaluation board with a Tegra 3 chip. I installed the L4T sample file system from the NVIDIA Tegra resources (https://developer.nvidia.com/linux-tegra) and set it up as described in the documentation on the NVIDIA site. There was already an SD card with L4T running, and I didn't want to change the boot loader, so I copied boot.scr.uimg to the root (/) folder and uImage to /boot/, and it boots from the existing SD card. While booting, some errors occurred for Bluetooth devices (there is no Bluetooth device on the board), so I disabled Bluetooth with the following command: sudo mv /etc/init/bluetooth.conf /etc/init/bluetooth.conf.noexec Now the problem is that the mouse and keyboard are not working, so I cannot log in. Even though I installed a desktop, the mouse and keyboard still do not work. They are enumerating, though: the lsusb command shows both the USB mouse and keyboard. The installed file system is Ubuntu 13.04 and the Linux kernel version is 3.1. What should I do? Please help. Thanks in advance.

    Read the article

  • .NET to iOS: From WinForms to the iPad

    - by RobertChipperfield
    One of the great things about working at Red Gate is getting to play with new technology, and right now, that means mobile. A few weeks ago, we decided that a little research into the tablet computing arena was due, and purely from a numbers point of view, that suggested the iPad as a good target device. A quick trip to iPhoneDevCon in San Diego later, Marine and I came back full of ideas, and with some concept of how iOS development was meant to work. Here's how we went from there to the release of Stacks & Heaps, our geeky take on the classic "Snakes & Ladders" game.

    Step 1: Buy a Mac
    I've played with many operating systems in my time: from the original BBC Model B, through DOS, Windows, Linux, and others, but I'd so far managed to avoid buying fruit-flavoured computer hardware! If you want to develop for the iPhone, iPad or iPod Touch, that's the first thing that needs to change. If you've not used OS X before, the first thing you'll realise is that everything is different! In the interests of avoiding a flame war in the comments section, I'll only go so far as to say that a lot of my Windows-flavoured muscle memory no longer worked. If you're in the UK, you'll also realise your keyboard is lacking a # key, and that " and @ are the other way around from normal. The wonderful Ukelele keyboard layout editor restores some sanity here, as long as you don't look at the keyboard when you're typing. I couldn't give up the PC entirely, but a handy application called Synergy comes to the rescue: it lets you share a single keyboard and mouse between multiple machines. There are a few limitations: Alt-Tab always seems to go to the Mac, and Windows 7's UAC dialogs require the local mouse for security reasons, but it gets you a long way at least.

    Step 2: Register as an Apple Developer
    You can register as an Apple Developer free of charge, and that lets you download Xcode and the iOS SDK. You also get the iPhone/iPad emulator, which is handy, since you'll need to be a paid member before you can deploy your apps to a real device. You can either enroll as an individual or as a company. They both cost the same ($99/year), but there are a few differences between them. If you register as a company, you can add multiple developers to your team (all for the same $99, not $99 per developer), and you get to use your company name in the App Store. However, you'll need to send off significantly more documentation to Apple, and I suspect the process takes rather longer than for an individual, where they just need to verify some credit card details. Here's a tip: if you're registering as a company, do so as early as possible. The approval process can take a while to complete, so get the application in in plenty of time.

    Step 3: Learn to love the square brackets!
    Objective-C is the language of the iPad. C and C++ are also supported, and if you're doing some serious game development, you'll probably spend most of your time in C++ talking OpenGL, but for forms-based apps, you'll be interacting with a lot of the Objective-C SDK. Like shifting from Ctrl-C to Cmd-C, it feels a little odd at first, with the familiar string.Format(...) turning into:

        NSString *myString = [NSString stringWithFormat:@"Hello world, it's %@", [NSDate date]];

    Thankfully Xcode's auto-complete is normally passable, if not up to Visual Studio's standards, which coupled with a huge amount of content on Stack Overflow means you'll soon get to grips with the API. You'll need to get used to some terminology changes, though. Coming from a .NET background, there are some luxuries you no longer have developing Objective-C in Xcode:

    - Generics! Remember back in .NET 1.1, when all collections were just objects? Yup, we're back there now.
    - ReSharper. Or, more generally, very much refactoring support. The not-many-keystrokes rename of a class, its file, and all references to it in Visual Studio turns into a much more painful experience in Xcode.
    - Garbage collection. This is actually rather less of an issue than you might expect: if you follow the rules, the reference counting provided by Objective-C gets you a long way without too much pain. Circular references are their usual problematic self, though.
    - Decent exception handling. You do have exceptions, but they're nowhere near as widely used. Generally, if something goes wrong, you get nil back. Which brings me on to...
    - Calling a method on a nil object isn't a failure; it just returns nil itself! There are many arguments for and against this, but personally I fall into the "stuff should fail as quickly and explicitly as possible" camp.

    Less specifically, I found that there's more chance of code failing at runtime rather than getting caught at compile time: using the @selector(...) syntax to pass a method signature isn't (can't be) checked at compile time, so the first you know about a typo is a crash when you try to call it. The solution to this is of course lots of great testing, both automated and manual, but I still find comfort in provably correct type safety being enforced in addition to testing.

    Step 4: Submit to the App Store
    Assuming you want to distribute to more than a handful of devices, you're going to need to submit your app to the Apple App Store. There are a few gotchas in terms of getting builds signed with the right certificates, and you'll be bouncing around between Xcode and iTunes Connect a fair bit, but eventually you get everything checked off the to-do list and are ready to upload your first binary! With some amount of anticipation, I pressed the Upload button in Xcode, ready to release our creation into the world, but was instead greeted by an error informing me my XML file was malformed. A little Googling later, it turned out that a simple rename from "Stacks&Heaps.app" to "StacksAndHeaps.app" worked around an XML escaping bug, and we were good to go. The next step is to wait for approval (or otherwise). After a couple of weeks of intensive development, this part is agonising. Did we make it? The Apple jury is still out at the moment, but our fingers are firmly crossed! In the meantime, you can see some screenshots and leave us your email address if you'd like us to get in touch when it does go live at the MobileFoo website.

    Step 5: Profit!
    Actually, that wasn't the idea here: Stacks & Heaps is free; there are no adverts, and we're not going to sell all your data either. So why did we do it? We wanted to get an idea of what it's like to move from coding for a desktop environment to something completely different. We don't know whether in a year's time the iPad will still be the dominant force, or whether Android will have smoothed out some bugs, tweaked the performance, and polished the UI, but I think it's a fairly sure bet that the tablet form factor is here to stay. We want to meet people who are using it, start chatting to them, and find out about some of the pain they're feeling. What better way to do that than do it ourselves, and get to write a cool game in the process?

    Read the article

  • Slick 2D first trial error

    - by pringlesinn
    I followed some advice to learn Slick2D, and when I started the "SimpleGame" tutorial I got my first error. Does anyone have any idea what it is and how to fix it?

        Sun Dec 26 23:09:12 GMT-03:00 2010 INFO:Slick Build #274
        Sun Dec 26 23:09:12 GMT-03:00 2010 INFO:LWJGL Version: 2.0b1
        Sun Dec 26 23:09:12 GMT-03:00 2010 INFO:OriginalDisplayMode: 1024 x 768 x 16 @60Hz
        Sun Dec 26 23:09:12 GMT-03:00 2010 INFO:TargetDisplayMode: 800 x 600 x 0 @0Hz
        Sun Dec 26 23:09:12 GMT-03:00 2010 ERROR:Could not find a valid pixel format
        org.lwjgl.LWJGLException: Could not find a valid pixel format
            at org.lwjgl.opengl.WindowsPeerInfo.nChoosePixelFormat(Native Method)
            at org.lwjgl.opengl.WindowsPeerInfo.choosePixelFormat(WindowsPeerInfo.java:52)
            at org.lwjgl.opengl.WindowsDisplayPeerInfo.initDC(WindowsDisplayPeerInfo.java:54)
            at org.lwjgl.opengl.WindowsDisplay.createWindow(WindowsDisplay.java:158)
            at org.lwjgl.opengl.Display.createWindow(Display.java:299)
            at org.lwjgl.opengl.Display.create(Display.java:848)
            at org.lwjgl.opengl.Display.create(Display.java:800)
            at org.newdawn.slick.AppGameContainer.tryCreateDisplay(AppGameContainer.java:299)
            at org.newdawn.slick.AppGameContainer.access$000(AppGameContainer.java:34)
            at org.newdawn.slick.AppGameContainer$2.run(AppGameContainer.java:364)
            at java.security.AccessController.doPrivileged(Native Method)
            at org.newdawn.slick.AppGameContainer.setup(AppGameContainer.java:345)
            at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:314)
            at SimpleGame.main(SimpleGame.java:38)
        Exception in thread "main" org.newdawn.slick.SlickException: Failed to initialise the LWJGL display
            at org.newdawn.slick.AppGameContainer.setup(AppGameContainer.java:375)
            at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:314)
            at SimpleGame.main(SimpleGame.java:38)

    Read the article

  • Booting from USB on Mac Air (using setup_mac_usb_boot.sh)

    - by Mike O
    So, I've been working on this for hours and it's getting a little tiring. As some of you may know, installing Ubuntu on Macs is frequently an adventure, and I'm experiencing that right now. The part I'm hung up on at the moment is making a bootable USB. I would just use a CD, but my laptop is a MacBook Air (which doesn't have a CD drive), and I don't own an external CD drive. I initially attempted to use the command-line method supplied by the Ubuntu documentation here: https://help.ubuntu.com/community/How%20to%20install%20Ubuntu%20on%20MacBook%20using%20USB%20Stick However, the result wasn't recognized by rEFIt even when I made a number of different modifications to the process, so I quickly decided to look elsewhere. I came across this guide: https://help.ubuntu.com/community/MacBookAir4-2#Basic_Installation_Instructions This ended up working to a large extent. If I choose the supplied GRUB from rEFIt, it will bring me to the Ubuntu GRUB menu, asking me to try it, install, or check the disk. And if I choose to boot Linux directly from rEFIt, it will bring me to the language selection menu. But when I make my selection from either of these menus, it pauses for about ten seconds and then gives me a command-line error message. It begins with "kernel panic - not syncing: timer doesn't work through interrupt" and then shows about eight file names. Does anyone here have any ideas as to what could be causing this? I also tried the script with both Ubuntu 11.10 (the current version when the script was written) and 12.04.

    Read the article

  • Master-Details with RadGridView for Silverlight 4, WCF RIA Services RC2 and Entity Framework 4.0

    I have prepared a sample project with the Silverlight 4 version of RadGridView released yesterday. The sample project was created with Visual Studio 2010, WCF RIA Services RC2 for Visual Studio 2010, and ADO.NET Entity Framework (.NET 4). I have decided to use the SalesOrderHeader and SalesOrderDetails tables from the Adventure Works database, because they provide the perfect one-to-many relationship. I will not go over the steps for creating the ADO.NET Entity Data Model and the Domain Service class; in case you are not familiar with them, you should start with Brad Abrams' series of blog posts and read this blog after that. To enable the master-details relationship we need to modify two things. First of all, we need to include the automatic retrieval of the child entities in the domain service class. We do this by using the Include method:

        public IQueryable<SalesOrderHeader> GetSalesOrderHeaders()...
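    Since the snippet above is cut off, here is a sketch of how that query method typically looks when eagerly loading the child rows. The entity set and association names are assumed from the standard AdventureWorks model, and the matching [Include] attribute on the metadata class is the usual second half of the change:

        // In the domain service: eagerly load each order's detail rows.
        public IQueryable<SalesOrderHeader> GetSalesOrderHeaders()
        {
            return this.ObjectContext.SalesOrderHeaders
                       .Include("SalesOrderDetails");
        }

        // In the metadata class for SalesOrderHeader: tell RIA Services
        // to serialize the association to the client.
        [Include]
        public EntityCollection<SalesOrderDetail> SalesOrderDetails { get; set; }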

    Read the article

  • Use 3 monitors w/built-in intel adapter + two old nvidia PCI cards on 10.10?

    - by Kendall Gifford
    I'd like to move away from Windows on my current workstation. The only thing holding me back is that I have 3 monitors connected to the system and I really take advantage of the real estate when working. I just installed Ubuntu 10.10 on the system and one of the monitors is up and running just fine. This monitor is connected to the built-in Intel adapter. I also have two old nVidia GeForce4 MX 4000 (nv19pl) cards in my two PCI slots, with two monitors connected to them respectively. I installed the legacy (and proprietary) nVidia drivers (the nvidia-96 package) that claim to support these old cards. Now the question is how to get X configured to use all adapters (using two different drivers) so I can use all three monitors - and is this even possible? From what I've read, it looks like I'll have to write an xorg.conf file, since the nVidia driver doesn't support the auto-magic configuration supported by other drivers. On this site, http://wiki.ubuntu.com/X/Config, it says that on 10.10 I just need to write an xorg.conf "containing only those sections and options that you need to override Xorg's autoconfigurated settings". So, does this mean I can get away with only including the nVidia-specific configuration and everything else will get auto-configured? Or will providing a config with a "Device" section overrule the auto-magic from detecting/using the Intel adapter? I ran the included nvidia-xconfig to generate a basic, nVidia-specific xorg.conf, but I'm hesitant to reboot with it in place, suspecting I'll have a screwed-up display. Also, is there any way (any tool or command) to generate an xorg.conf from the current, auto-configured running state of an X session? If I have to write a full, complete config, I'd rather start with one that includes everything that's been auto-detected thus far (and merge it with my nVidia version). Anyhow, any info and thoughts are greatly appreciated (as are answers).
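    For illustration only, a minimal sketch of the kind of hybrid xorg.conf being discussed, with one Device section per driver. The BusID values, identifiers and layout here are hypothetical and would need to match the actual lspci output; a second nVidia Device/Screen pair would follow the same pattern, and Xinerama is typically needed to merge the screens into one desktop:

        Section "Device"
            Identifier "IntelOnboard"
            Driver     "intel"
        EndSection

        Section "Device"
            Identifier "NvidiaPCI1"
            Driver     "nvidia"
            BusID      "PCI:3:0:0"   # hypothetical; check lspci
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device     "IntelOnboard"
        EndSection

        Section "Screen"
            Identifier "Screen1"
            Device     "NvidiaPCI1"
        EndSection

        Section "ServerLayout"
            Identifier "ThreeHead"
            Screen 0 "Screen0"
            Screen 1 "Screen1" RightOf "Screen0"
            Option "Xinerama" "on"
        EndSection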

    Read the article

  • Ubuntu One synchronization problems

    - by user72249
    I am a user of Ubuntu and Ubuntu One. First: I have a serious problem with Ubuntu One. I have a Windows machine and several Ubuntu machines, and they all sync to one account. My first problem is that I can't really delete files. Whenever I delete something, Ubuntu One resyncs it, so it appears again in my folder. Furthermore, when I move something to another directory, it syncs the new location and resyncs the old one, so my files get doubled. I can't reorganize my files. So I tried to delete the duplicated files through the web dashboard, but I can't reorder or select multiple files by extension; for example, I wanted to move all PDFs to another location... Second: I can't configure the location of the main Ubuntu One folder in the Ubuntu One app. For example, I want my Ubuntu One folder to be on another partition than my HOME folder. This is a problem on both Linux and Windows. I tried to move that folder on Windows with the hard-link method: I uninstalled Ubuntu One, created a folder link to another drive, then reinstalled it. Then I lost all my files in Ubuntu One. Luckily I have a backup, but I thought Ubuntu One was a stable solution for me, and I had planned to extend my space. These bugs are major problems! It would be worth paying for if it worked perfectly; I think that's more important than releasing a new version every month.

    Read the article

  • The Strange History of the Honeywell Kitchen Computer

    - by Jason Fitzpatrick
    In 1969 the Honeywell corporation released a $10,000 kitchen computer that weighed 100 pounds, was as big as a table, and required advanced programming skills to use. Shockingly, they failed to sell a single one. Read on to be dumbfounded by how ahead of (and out of touch with) its time the Honeywell Kitchen Computer was. Wired delves into the history of the device, including how difficult it was to use: Now try to imagine all that in a late 1960s kitchen. A full H316 system wouldn't have fit in most kitchens, says design historian Paul Atkinson of Britain's Sheffield Hallam University. Plus, it would have looked entirely out of place. The thought that an average person, like a housewife, could have used it to streamline chores like cooking or bookkeeping was ridiculous, even if she aced the two-week programming course included in the $10,600 price tag. If the lady of the house wanted to build her family's dinner around broccoli, she'd have to code in the green veggie as 0001101000. The kitchen computer would then suggest foods to pair with broccoli from its database by "speaking" its recommendations as a series of flashing lights. Think of a primitive version of KITT, without the sexy voice. Hit up the link below for the full article.

    Read the article

  • OpenWorld on your iPad and iPhone

    - by KLaker
    Most of you probably know that each year I publish a data warehouse guide for OpenWorld, which contains links to the latest data warehouse videos, a calendar of the most important sessions and labs, and a section that provides profiles and relevant links for all the most important data warehouse presenters. For this year's conference I have made all this information available in an HTML app that runs on most smartphones and tablets. The pictures below show the HTML app running on iPad and iPhone. This exciting new web app contains information about why you should attend OpenWorld (just in case you have not yet booked your ticket!) as well as the following:

    - Getting to know 12c: a series of video interviews with George Lumpkin, Vice President of Data Warehouse Product Management
    - Your presenters: full biographies and links to social media sites for all the key data warehouse presenters
    - Must-see sessions: a list of all the most important data warehouse presentations at this year's conference
    - Our customers: profiles of our most important data warehouse customers
    - Must-attend labs: a list of all the most important data warehouse hands-on labs at this year's conference
    - Links: a list of links to the most important data warehouse sites

    If you want to run these web apps on your smartphone and/or tablet then follow these links:
    iPhone - https://876d5e65b7768ca57d1fd1236578c9374b1fca87.googledrive.com/host/0Bz-zGlWahRf4OXNzejBiRFV5ZXc/iPhone-DWoow2014.html
    iPad - https://876d5e65b7768ca57d1fd1236578c9374b1fca87.googledrive.com/host/0Bz-zGlWahRf4OXNzejBiRFV5ZXc/iPad-DW2014.html
    Android users: I have tested the app on Android and there appears to be a bug in the way the Chrome browser displays iframes, because scrolling does not work. The app does work correctly if you use either the Android version of the Opera browser or the standard Samsung browser. If you have any comments about the app (content you would like to see) then please let me know. Enjoy OpenWorld.

    Read the article

  • Desktop Fun: Merry Christmas Icon Packs

    - by Asian Angel
    Christmas is getting closer, so it is time to start decorating your desktops! Today we have a collection of fun and colorful Merry Christmas icons to help get you and your desktop ready for the holidays. Note: To customize the icon setup on your Windows 7 & Vista systems see our article here. Using Windows XP? We have you covered here.

    Sneak Preview: Here is the holiday desktop that we put together using the Standard Christmas Icons 2010.1 pack shown below. Note: The original, unmodified version of this wallpaper can be found here. A closer look at the fun icons we used on our desktop…

    The Icon Packs:
    - Charlie Brown Christmas (*.ico format only)
    - Frosty the Snowman 1.0 (*.ico format only)
    - Winter Icons 1.0 (*.ico format only)
    - Christmas Icons Set 1 1.0 (*.ico format only)
    - Christmas Icons Set 2 1.0 (*.ico format only)
    - Wreaths Icons 1.0 (*.ico format only)
    - SketchCons Christmas (*.ico format only)
    - Standard Christmas Icons 2010.1 (*.ico, .png, .bmp, and .gif formats)
    - Christmas Icons (*.ico format only)
    - Christmas (*.ico, .png, and .icns formats)
    - Silent Night (*.png format only)
    - My Christmas 1.0 (*.ico and .png formats)
    - Xmas Festival (*.png format only)
    - Xmas Stickers (*.png format only)
    - Winter Wonderland (*.ico format only)

    Wanting more great icon sets to look through? Be certain to visit our Desktop Fun section for more icon goodness!

    Read the article

  • ANNOUNCEMENT: Oracle VM 3 Templates Available for Oracle Secure Global Desktop 4.62

    - by Mohan Prabhala
    Today, we are proud to announce the general availability of Oracle VM 3 templates for Oracle Secure Global Desktop version 4.62. With Oracle VM 3 templates, anyone using Oracle VM 3 need not download, install and configure the operating system and product(s) individually. In this case, the supported operating system (Oracle Linux 5.7) and the Oracle Secure Global Desktop 4.62 product are packaged together into a template that one can easily import and clone as a VM into Oracle VM 3. This results in a nearly instant deployment and configuration of Oracle Secure Global Desktop within Oracle VM 3, drastically reducing the evaluation and deployment time. Feel free to give it a try! Log in to the Oracle VM section of the Oracle Software Delivery Cloud (click on 'Cloud Portal (Main)' at the top right) and:

    - Under Oracle VM templates - x86 64 bit, look for: Oracle VM 3 Template (OVF) for Oracle Secure Global Desktop Media Pack for x86_64 (64 bit) - Oracle Secure Global Desktop 4.62 template for x86_64 (64 bit) with Oracle Linux 5.7
    - Under Oracle VM templates - x86 32 bit, look for: Oracle VM 3 Template (OVF) for Oracle Secure Global Desktop Media Pack for x86 (32 bit) - Oracle Secure Global Desktop 4.62 template for x86 (32 bit) with Oracle Linux 5.7

    Download either of the above templates. Once you are done, you must:

    1. First, import the assembly (.ova) file that you downloaded from the Oracle Software Delivery Cloud.
    2. Next, create a virtual machine template from the assembly.
    3. Finally, create a virtual machine from the template.

    Once the virtual machine is created and starts up, be sure to configure the networking parameters (hostname, IP address, netmask, gateway, etc.) and optional user parameters correctly. You must also enter a root password during first boot. And that's it: the Oracle Secure Global Desktop install script will pick up the networking parameters, prompt for confirmation and complete a default installation. Once the installation is complete, you may want to refer to the Oracle Secure Global Desktop Administration Guide to learn more about Oracle Secure Global Desktop and its capabilities.

    Read the article

  • Get the Windows 8 Charms Bar in Windows 7, Vista, and XP Using a RocketDock Skin

    - by Lori Kaufman
    Have you tried one of the Windows 8 Preview releases and found you like the Charms bar on the Metro Start Screen? If you're not quite ready to give up Windows 7, there is a way to get the Charms bar from Windows 8. You can easily add the Charms bar to your desktop in Windows 7 using a RocketDock skin. RocketDock is a free, customizable application launcher for Windows. See our article about RocketDock to learn how to add it to your Windows Desktop. You can also use a portable version of RocketDock. To add a "Charms bar" to your Windows 7 desktop, extract the .rar file you downloaded (see the link at the end of this article). RAR files are associated with WinRAR, which is shareware. NOTE: You can use WinRAR free of charge for 40 days but then you have to buy it ($29.00). However, you can also use the free program 7-Zip to extract RAR files.
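    If you go the 7-Zip route, the extraction can also be done from the command line; a hedged example with a hypothetical archive name and destination folder:

        7z x CharmsBarSkin.rar -o"C:\Program Files (x86)\RocketDock\Skins"

    The x switch extracts with full paths, and -o sets the output directory.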

    Read the article

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction to the new features that we added to the StreamInsight LinqPad driver. We'll show you how to connect to a remote server, how to inspect the entities present on that server, how to compose on top of them, and how to manage their lifetime.

    Installing the driver
    Info on how to install the driver can be found in an earlier blog post here.

    Establishing connections
    As you click on the "Add Connection" link in the left pane you will notice that it's now possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it's possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable, hence you can create a new application if you don't want to make changes to an existing application. If you choose a new application name you will be prompted for confirmation before it gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there's any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (i.e., the remote server could not be reached, etc.).

    The context for remote servers
    Let's take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context; think of it as a class that wraps all the code that you're writing. If you're connecting to a live server the context will contain the application object itself and all entities present in this application (sources, sinks, subjects and processes). The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as such: the first name is the name of the property of the context class and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names, so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types; more on that later.

    Remoting is not supported
    As you play with the entities exposed by the context you will notice that you can't read and write directly to/from them. If, for instance, you try to dump the content of an entity you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server, and dumping its content means reading the events produced by this entity into the local process.

        ObservableSource.Dump();

    will yield the following error:

        Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer.

    This basically tells you that you can call the Bind() method to direct the output of this source to a sink that has to be defined on the remote machine as well. You can't bring the results to the LinqPad window unless you write code specifically for that.

    Compose queries
    You may ask: what's the purpose of all that? After all, the same information is present in the Event Flow Debugger, so why bother showing it in LinqPad? First of all, what gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense, we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let's say that this code creates an entity:

        using (var server = Server.Connect(...))
        {
            var a = server.CreateApplication("WhiteFish");
            var src = a
                .DefineObservable<int>(() => Observable.Range(0, 3))
                .Deploy("ObservableSource");

    If later we want to compose with the source, we have to fetch it and then we can bind something to it:

        a.GetObservable<int>("ObservableSource").Bind(...

    This means that we had to know a bunch of things about it: that it's a source, that it's an observable, that it produces a result with payload Int32, and that it's named "ObservableSource". Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them, and it's much easier to make sense of what's available. Let's look at a scenario where composition is plausible. With the new programming model it's possible to create "cold" sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it's much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source: we will see that its type is Func<int, int, IQbservable<int>>. This in effect means that we need to pass two int parameters before we can get a source that produces events, and the type of those events is int. In the particular case of my example, I had the source produce a range of integers and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet, and then someone else (e.g. an administrator) can pass whatever parameters are appropriate and run the process.

    Proxy Types
    Here's an interesting scenario: what if someone created a source on a server but forgot to tell you what type they used? Worse yet, they might have used an anonymous type, and even though they can refer to it by name, you can't figure out how to use that type. Let's walk through an example that shows how you can compose against types you don't have the definition of. This is how we can create a source that returns an anonymous type:

        Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1");

    Now if we refresh the connection we can see the new source named O1 appear in the list. But what's more important is that we now have a type to work with, so we can compose a query that refers to the anonymous type:

        var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5);
        var filter = from i in O1
                     where i > threshold
                     select i;
        filter.Deploy("O2");

    You will notice that the anonymous type defined with the statement new { I = i } can now be manipulated by a client that does not have access to it, because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous; the test is whether the LinqPad driver can resolve the type or not. If it can't, a new type will be generated that approximates the type that exists on the server.

    Control metadata
    In addition to composing processes on top of the existing entities we can do other useful things. We can delete them; nothing new here, as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes in handy. There's another way to find out what's behind a property: dump its expression. The first line in the output tells us the name of the entity used to build this property in the context.

    Runtime information
    So let's create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right-click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you have access to the process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view, etc.).

    Regards,
    The StreamInsight Team
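    As a rough sketch of that last bind-and-run step, assuming the StreamInsight 2.1 Rx-style API is used end to end (the sink definition, entity names and process name here are illustrative, not from the original post):

        // Define and deploy a simple console sink on the server...
        var sink = a.DefineObserver(() => Observer.Create<int>(x => Console.WriteLine(x)))
                    .Deploy("ConsoleSink");

        // ...then bind the deployed source to it and run the process.
        var process = a.GetObservable<int>("ObservableSource")
                       .Bind(a.GetObserver<int>("ConsoleSink"))
                       .Run("MyProcess");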

    Read the article

  • Unknown CSS font-family oddity with IE7-10 on Win Vista-8

    - by Jeff
    I am seeing the following "oddity" with IE7-10 on Win Vista-8: When declaring font-family: serif; I am seeing an old bitmapped serif font that I can't identify (see screenshot below) instead of the expected font Times New Roman. I know it's an old bitmapped font because it displays aliased, without any font smoothing, with IE7-10 on Win Vista-8 (just like Courier on every version of Win). Screenshot: I would like to know (1) can anyone else confirm my research and (2) BONUS: which font is IE displaying? Notes: IE6 and IE7 on Win XP displays Times New Roman, as they should. It doesn't matter if font-family: serif; is declared in an external stylesheet or inline on the element. Quoting the CSS attribute makes no difference. Adding "Unkown Font" to the stack also makes no difference. New Screenshot: The answer from Jukka below is correct. Here is a new screenshot with Batang (not BatangChe) to illustrate. Hope this helps someone.

    Read the article

  • How to configure SoapUI with client certificate authentication

    - by gvdmaaden
    SoapUI is one of the best free tools around for testing web services. Some time ago I was trying to send a SOAP message to an SSL web service that was set up for client certificate authentication. I pretty soon got stuck at the "javax.net.ssl.SSLException: HelloRequest followed by an unexpected handshake message" error, but after reading several posts on the internet I solved the issue. It's not really that complicated after all, but since I could not find a decent place on the internet that explains this scenario properly, here's the list of steps you need to follow to make it work. Note: the following steps are based on a Windows environment.

    Step one: Export your certificate (the one that you want to use as the client certificate) using the export wizard, with the private key and with all certificates in the certification path. Give it a password (anything you want) and export it as a PFX file to a location somewhere on disk.

    Step two: Install the newest version of SoapUI (currently 3.6.1). Open the file C:\Program Files\eviware\soapUI-3.6.1\bin\soapUI-3.6.1.vmoptions and add this line at the bottom:

        -Dsun.security.ssl.allowUnsafeRenegotiation=true

    This is needed because of a Java security feature in the newest frameworks (for further reading about this issue, see http://www.soapui.org/forum/viewtopic.php?t=4089 and http://java.sun.com/javase/javaseforbusiness/docs/TLSReadme.html).

    Open SoapUI, go to Preferences > SSL Settings, and configure your certificate in the keystore (use the same password as in step one). That should be it. Just create a new project and import the WSDL from the client-authenticated SSL web service. Now you should be able to send SOAP messages with client certificate authentication. The above steps worked for me, but please drop a note if they do not work for you.
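    To sanity-check the exported PFX before pointing SoapUI at it, the JDK's keytool can list its contents; a hedged example with a hypothetical file name:

        keytool -list -v -keystore client-cert.pfx -storetype PKCS12

    You will be prompted for the password chosen in step one, and the output should show the full certificate chain that was exported.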

    Read the article

  • MySQL - Installation

    - by Stuart Brierley
    In order to create a development environment for a project I am working on, I recently needed to install MySQL Server. The first step was to download the MSI. Running it presents you with the installer splash screen, detailing the version of MySQL that you are about to install - in this case MySQL Server 5.1. Next you can choose whether to perform a Typical, Complete or Custom installation. Although this was the first time I had installed MySQL and the Custom option states "Recommended for advanced users", I opted to carry out a Custom installation, specifically so that I could be sure of which features and components were installed. On the Custom Setup screen you can choose which components to install. By default the Developer Components are not included, but I opted to include some of these elements. Next up is the ready-to-install screen and then the installation progress. Following the completion of the installation you are shown a few screens with details of the MySQL Enterprise subscription option. Finally the installation is complete and you are offered the chance to configure and register MySQL. Next I will be looking at the configuration of MySQL Server 5.1.
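    Once the installer finishes, a quick way to confirm the server is reachable is to query its version from a command prompt; a hedged one-liner, assuming the MySQL command-line client is on the PATH and using the root password set during configuration:

        mysql -u root -p -e "SELECT VERSION();"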

    Read the article

  • Munq is for web, Unity is for Enterprise

    - by oazabir
    The Unity Application Block (Unity) is a lightweight, extensible dependency injection container with support for constructor, property, and method call injection. It's a great library for facilitating Inversion of Control, and the recent version supports AOP as well. However, when it comes to performance, it's CPU hungry. In fact it's so CPU hungry that it's impossible to make it work at internet scale. I was investigating a CPU issue on a portal that gets around 3MM hits per day, and I found unusually high CPU usage. Here's why: I did some CPU profiling on my open source project Dropthings and found that the highest CPU is consumed by Unity's Resolve<>(). There's no funky use of Unity in the project, just straightforward Register<>() and Resolve<>(). But as you can see, Resolve<>() is consuming significantly high CPU even after the site is warm and has been running for a while. Then I tried Munq, which is a basic dependency injection container. It has everything you will usually need in a regular project, and it boasts being the fastest DI container out there. So I converted all the Unity code to Munq in Dropthings, did a CPU profile, and voilà! There's no trace of any Munq calls anywhere. That proves Munq is a lot faster than Unity.
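    For context, a minimal sketch of the kind of micro-benchmark that makes the Resolve<>() cost visible; the RegisterType/Resolve calls are the standard Unity 2.x API, while the interface and timing loop are illustrative only:

        using System;
        using System.Diagnostics;
        using Microsoft.Practices.Unity;

        public interface IWidget { }
        public class Widget : IWidget { }

        class ResolveBenchmark
        {
            static void Main()
            {
                var container = new UnityContainer();
                container.RegisterType<IWidget, Widget>();

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < 1000000; i++)
                {
                    // The call that dominated the CPU profile described above.
                    container.Resolve<IWidget>();
                }
                sw.Stop();
                Console.WriteLine("1M Resolve<IWidget>() calls: {0} ms", sw.ElapsedMilliseconds);
            }
        }

    Running the same loop against the other container's equivalent register/resolve calls gives a comparable number for Munq.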

    Read the article

  • PHP remote development workflow: git, symfony and hudson

    - by user2022
    I'm looking to develop a website, and all the work will be done remotely (no local dev server). The reason for this is that my shared hosting company, a2hosting, has a specific configuration (symfony, mysql, git) that I don't want to spend time duplicating when I can just ssh in and develop remotely, or through Netbeans' remote editing features. My question is how I can use git to separate my site into three areas: live, staging and dev. Here's my initial thought:

    - public_html (live site and git repo)
    - testing: a mirror of the site used for visual tests (full git repo)
    - dev/ticket#: git branches of public_html used for features and bug fixes (full git repo)

    Version control with git. Initial setup:

        cd public_html
        git init
        git add *
        git commit -m 'initial commit of the site'
        cd ..
        git clone public_html testing
        mkdir dev

    Development:

        cd dev
        git clone ../testing ticket#

    All work is done in ./dev/ticket#, then visit www.domain.com/dev/ticket# to visually test. Make granular commits as necessary until dev is done, then:

        git push origin master:ticket#

    If the above fails, merge the latest testing state into the current dev work (git merge origin/master), then try the push again. Mark ticket# as ready for integration.

    Integration and deployment process:

        cd ../../testing
        git merge ticket# -m "integration test for ticket#" --no-ff

    (check for conflicts), run hudson tests, and visit www.domain.com/testing for a visual test. If all tests pass: if this ticket marks the end of a big dev sprint, make a snapshot with git tag and git push --tags origin; otherwise just git push origin. Then:

        cd ../public_html
        git checkout -f

    (the live site should now have the latest dev from ticket#). If the tests fail, revert the merge (git checkout master~1; git commit -m "reverting ticket#") and update ticket# that testing failed, with the failure details.

    Snapshots: each major deployment sprint should have a standard name and should be tracked. Method: git tag. Naming convention: TBD.

    Reverting the site to a previous state: if something goes wrong, revert to the previous snapshot and debug the issue in dev with a new ticket#. Once the bug is fixed, follow the deployment process again.

    My questions: Does this workflow make sense? If not, any recommendations? Is my approach to reverting correct, or is there a better way to say 'revert to before commit x'?

    Read the article

  • Other games that employ mechanics like the game "Diplomacy"

    - by Kevin Peno
    I'm doing a little bit of research and I'm hoping you can help me track down any games, other than Diplomacy (online version here), that employ all or some of the mechanics in Diplomacy (rules, short form). Examples of what I'm looking for:

    - Simultaneous orders given prior to execution. In Diplomacy, players "write down" their moves and execute them "at the same time".
    - Support, in terms of supporting an attacker or defender to "take" a territory. In Diplomacy, no one unit is stronger than another; you need to combine the strength of multiple units to attack other territories.
    - Rules for how move conflicts are resolved. For example, two units move into a space, but only one is allowed - what happens?

    I may add to this list later, but these are the primary things I'm looking for. If you need clarification on anything just let me know. Note: I tried asking this on GamingSE, but it was shot down, so I am unsure where else I could post this. Since I am researching this for game development purposes, I assume this post is on topic. Please let me know if this is not the case. Please also feel free to re-categorize this. Thanks!

    Read the article

  • BUILD 2012 day 1 Keynote recap

    - by pluginbaby
    On October 30, 2012, Steve Ballmer kicked off the first BUILD conference keynote. Steve shared some insights around Windows 8:

    - 4 million customers upgraded to Windows 8 over the weekend since the October 26 release (so in only 3 days!).
    - Focus on sharing code between Windows 8 and Windows Phone 8.
    - Syncing everything through SkyDrive.
    - Xbox Music free streaming and Xbox SmartGlass.

    He did all the demos himself, showing off great "Windows 8 generation" devices already available (including an 82-inch Windows 8 "slate" by Perceptive Pixel). Steve Guggenheimer (Microsoft's Corporate Vice President, DPE) talked about "The Business Opportunity with Windows 8".

    Notable announcements of day 1:

    - The Windows Phone 8 SDK is now available at dev.windowsphone.com (includes the SDK, a free version of VS2012, Blend 5, and emulators).
    - Release of the .NET Framework for Windows Phone 8: the ability to use C# 5 or Visual Basic 11 features in your code (async programming model, ...), and to share code between WP8 and Windows Store apps.
    - Windows Phone 8 individual developer registration is reduced to $8 for the next 8 days! (hurry up…)

    Note: strange absence of Steven Sinofsky on stage…

    Watch the entire keynote online: http://channel9.msdn.com/Events/Build/2012/1-001
    Read the full transcript: http://www.microsoft.com/en-us/news/Speeches/2012/10-30BuildDay1.aspx

    Read the article

  • JSR updates - November 2013

    - by Heather VanCura
    This week has been a busy week for JCP participants! Ten JSRs related to the upcoming Java Standard Edition (Java SE) 8 release posted reviews: four Public Reviews and six Maintenance Reviews. All JSRs are operating under the latest version of the JCP program and have public feedback mechanisms and issue trackers. Please review and comment on these JSRs; your input and participation is wanted and needed!

    - JSR 308, Annotations on Java Types, published a Public Review. This review closes 4 December.
    - JSR 310, Date and Time API, published a Public Review. This review closes 4 December.
    - JSR 335, Lambda Expressions for the Java Programming Language, published a Public Review. This review closes 4 December.
    - JSR 337, Java SE 8 Release Contents, published a Public Review. This review closes 4 December.
    - JSR 221, JDBC 4.0 API, published a Maintenance Review. This review closes 4 December.
    - JSR 199, Java Compiler API, published a Maintenance Review. This review closes 4 December.
    - JSR 160, Java Management Extensions Remote API, published a Maintenance Review. This review closes 4 December.
    - JSR 114, JDBC Rowset Implementations, published a Maintenance Review. This review closes 4 December.
    - JSR 3, Java Management Extensions Specification, published a Maintenance Review. This review closes 4 December.
    - JSR 206, Java API for XML Processing, published a Maintenance Review. This review closes 22 November.

    Two other JSRs also published recent updates:

    - JSR 354, Money and Currency API, published a Public Review. This review closes 23 November.
    - JSR 107, JCACHE - Java Temporary Caching API, published a Proposed Final Draft.

    Read the article

  • All New Oracle Linux Curriculum Now Available

    - by Antoinette O'Sullivan
    Develop your system administration skills with the all-new Oracle Linux System Administration curriculum. This curriculum includes key courses that will help you with any version of Linux:

    Unix and Linux Essentials: This 3-day course gives those new to Oracle Linux the basic skills they need to interact comfortably and confidently with the operating system.

    Oracle Linux System Administration: This 5-day course teaches those who are comfortable with the basic skills how to:
    - Install Oracle Linux
    - Gain an understanding of the benefits of Oracle's Unbreakable Enterprise Kernel (UEK)
    - Configure the kernel, install packages, and update the kernel of a running system
    - Configure users and rights, create and manage file systems, configure networking, and manage system security
    - Properly prepare a Linux environment for installation of Oracle Database

    Both of these hands-on, instructor-led courses are available as:
    - Live-Virtual Delivery: attend these classes from your desk, no travel necessary.
    - In-Class Delivery: travel to a classroom to attend these classes across the world. Some events already on the schedule are shown below.

    Unix and Linux Essentials (all delivered in English):
    - Johannesburg, South Africa: 8 October 2012
    - Woodmead, South Africa: 15 July 2013
    - Denver, Colorado, US: 23 January 2013
    - Jakarta, Indonesia: 13 November 2012
    - Singapore: 22 October 2012
    - Sydney, Australia: 4 February 2013
    - Brisbane, Australia: 29 April 2013
    - Melbourne, Australia: 29 January 2013

    Oracle Linux System Administration (all delivered in English):
    - Gaborone, Botswana: 22 April 2013
    - Vilvoorde, Belgium: 15 October 2012
    - Melbourne, Australia: 26 November 2012

    For more information on these classes or to express interest in additional events, go to http://oracle.com/education/linux

    Read the article

  • Preventing battery from charging

    - by intuited
    I'm running on UPS power and would like to prevent the laptop's battery from charging, to increase the amount of power available to other devices. Is there a way to do this?

    Update: The machine is a Dell Latitude D400. If people want more details, just ask. Also, I'm gathering that I need to explain my desired setup a little better. I've gotten a bunch of suggestions about taking the battery out. I'm not sure if people are suggesting to take the battery out while the machine is running (this, as I understand, is not a good idea with most laptops) or to just remove the battery altogether. The latter option is not optimal, because ideally I'd like to use the 30-60 minutes of power in the laptop battery and then switch over to UPS power. The details of the switch-over may constitute a separate question, but if I can't find a way to keep the laptop battery from charging, then removing the battery from the machine altogether may be the best way to do this. I'm not sure yet if this machine will run without a battery, but I'll check that out. Other than the laptop, the UPS is just supporting a cable modem, a router and a USB hub. Again, in the idealized version of this setup, all the power management changes would be automated, i.e. they would not require replugging anything or pressing Fn-keys. I'd like the machine to start using laptop battery power when apcupsd indicates that the UPS A/C is out, and then start using UPS power, but not charging the battery, when the battery is almost depleted.

    Read the article

  • Anatomy of a .NET Assembly - PE Headers

    - by Simon Cooper
    Today, I'll be starting a look at what exactly is inside a .NET assembly: how the metadata and IL are stored, how Windows knows how to load it, and what all those bytes are actually doing. First of all, we need to understand the PE file format.

    PE files
    .NET assemblies are built on top of the PE (Portable Executable) file format that is used for all Windows executables and dlls, which itself is built on top of the MSDOS executable file format. The reason for this is that when .NET 1 was released, it wasn't a built-in part of the operating system like it is nowadays. Prior to Windows XP, .NET executables had to load like any other executable, and had to execute native code to start the CLR to read and execute the rest of the file. However, starting with Windows XP, the operating system loader knows natively how to deal with .NET assemblies, rendering most of this legacy code and structure unnecessary. It is still part of the spec, and so is part of every .NET assembly. The result of this is that there are a lot of structure values in the assembly that simply aren't meaningful in a .NET assembly, as they refer to features that aren't needed. These are either set to zero or to certain pre-defined values specified in the CLR spec. There are also several fields that specify the size of other data structures in the file, which I will generally be glossing over in this initial post.

    Structure of a PE file
    Most of a PE file is split up into separate sections; each section stores different types of data. For instance, the .text section stores all the executable code; .rsrc stores unmanaged resources; .debug contains debugging information; and so on. Each section has a section header associated with it; this specifies whether the section is executable, read-only or read/write, whether it can be cached, and so on. When an exe or dll is loaded, each section can be mapped into a different location in memory as the OS loader sees fit. In order to reliably address a particular location within a file, most file offsets are specified using a Relative Virtual Address (RVA). This specifies the offset from the start of each section, rather than the offset within the executable file on disk, so the various sections can be moved around in memory without breaking anything. The mapping from RVA to file offset is done using the section headers, which specify the range of RVAs which are valid within that section. For example, if the .rsrc section header specifies that the base RVA is 0x4000, and the section starts at file offset 0xa00, then an RVA of 0x401d (offset 0x1d within the .rsrc section) corresponds to a file offset of 0xa1d. Because each section has its own base RVA, each valid RVA has a one-to-one mapping with a particular file offset.

    PE headers
    As I said above, most of the header information isn't relevant to .NET assemblies. To help show what's going on, I've created a diagram identifying all the various parts of the first 512 bytes of a .NET executable assembly, with the bytes I will refer to in this post highlighted. Bear in mind that all numbers are stored in the assembly in little-endian format; the hex number 0x0123 will appear as 23 01 in the diagram.

    The first 64 bytes of every file is the DOS header. This starts with the magic number 'MZ' (0x4D, 0x5A in hex), identifying this file as an executable file of some sort (an .exe or .dll). Most of the rest of this header is zeroed out. The important part of this header is at offset 0x3C: this contains the file offset of the PE signature (0x80). Between the DOS header and the PE signature is the DOS stub, a stub program that simply prints out 'This program cannot be run in DOS mode.\r\n' to the console. I will be having a closer look at this stub later on.

    The PE signature starts at offset 0x80, with the magic number 'PE\0\0' (0x50, 0x45, 0x00, 0x00), identifying this file as a PE executable. It is followed by the PE file header (also known as the COFF header). The relevant field in this header is in the last two bytes, and it specifies whether the file is an executable or a dll; bit 0x2000 is set for a dll.

    Next up are the PE standard fields, which start with a magic number of 0x010b for x86 and AnyCPU assemblies, and 0x20b for x64 assemblies. Most of the rest of the fields are to do with the CLR loader stub, which I will be covering in a later post. After the PE standard fields come the NT-specific fields; again, most of these are not relevant for .NET assemblies. The one that is relevant is the highlighted Subsystem field, which specifies whether this is a GUI or console app: 0x20 for a GUI app, 0x30 for a console app.

    Data directories & section headers
    After the PE and COFF headers come the data directories; each directory specifies the RVA (first 4 bytes) and size (next 4 bytes) of various important parts of the executable. The only relevant ones are the 2nd (Import table), 13th (Import Address table), and 15th (CLI header). The Import and Import Address tables are only used by the startup stub, so we will look at those later on. The 15th points to the CLI header, where the CLR-specific metadata begins.

    After the data directories come the section headers, one for each section in the file. Each header starts with the section's ASCII name, null-padded to 8 bytes. Again, most of each header is irrelevant, but I've highlighted the base RVA and file offset in each header. In the diagram, you can see the following sections:

    - .text: base RVA 0x2000, file offset 0x200
    - .rsrc: base RVA 0x4000, file offset 0xa00
    - .reloc: base RVA 0x6000, file offset 0x1000

    The .text section contains all the CLR metadata and code, and so is by far the largest in .NET assemblies. The .rsrc section contains the data you see in the Details page of the right-click file properties dialog, but is otherwise unused. The .reloc section contains address relocations, which we will look at when we study the CLR startup stub.

    What about the CLR?
    As you can see, most of the first 512 bytes of an assembly are largely irrelevant to the CLR, and only a few bytes specify needed things like the bitness (AnyCPU/x86 or x64), whether this is an exe or dll, and the type of app this is. There are some bytes that I haven't covered that affect the layout of the file (e.g. the file alignment, which determines where in a file each section can start). These values are pretty much constant in most .NET assemblies, and don't affect the CLR data directly.

    Conclusion
    To summarize, the important data in the first 512 bytes of a file is:

    - DOS header. This contains a pointer to the PE signature.
    - DOS stub, which we'll be looking at in a later post.
    - PE signature.
    - PE file header (aka COFF header). This specifies whether the file is an exe or a dll.
    - PE standard fields. These specify whether the file is AnyCPU/32-bit or 64-bit.
    - PE NT-specific fields. These specify what type of app this is, if it is an app.
    - Data directories. The 15th entry (at offset 0x168) contains the RVA and size of the CLI header inside the .text section.
    - Section headers. These are used to map between RVA and file offset. The important one is .text, which is where all the CLR data is stored.

    In my next post, we'll start looking at the metadata used by the CLR directly, which is all inside the .text section.
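    To make the offsets above concrete, here is a small sketch that walks the fields described in this post. It uses only the values given above (the 'MZ' magic, the PE signature pointer at 0x3C, the 20-byte COFF header with Characteristics in its last two bytes, and the 0x010b/0x20b magic) and is illustrative rather than a robust PE parser:

        using System;
        using System.IO;

        class PeSketch
        {
            static void Main(string[] args)
            {
                byte[] b = File.ReadAllBytes(args[0]);

                // DOS header: 'MZ' magic, then the PE signature offset at 0x3C.
                if (b[0] != 0x4D || b[1] != 0x5A)
                    throw new InvalidDataException("No MZ magic - not an executable");
                int peOffset = BitConverter.ToInt32(b, 0x3C);

                // PE signature: 'PE\0\0' (0x50, 0x45, 0x00, 0x00).
                if (BitConverter.ToUInt32(b, peOffset) != 0x00004550)
                    throw new InvalidDataException("No PE signature");

                // COFF header follows the 4-byte signature; Characteristics is
                // the last 2 bytes of the 20-byte header. Bit 0x2000 means dll.
                ushort characteristics = BitConverter.ToUInt16(b, peOffset + 4 + 18);
                Console.WriteLine((characteristics & 0x2000) != 0 ? "dll" : "exe");

                // PE standard fields start right after the COFF header;
                // the magic is 0x010b for x86/AnyCPU, 0x20b for x64.
                ushort magic = BitConverter.ToUInt16(b, peOffset + 4 + 20);
                Console.WriteLine(magic == 0x010b ? "x86/AnyCPU" : "x64");
            }
        }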

    Read the article
