Search Results

Search found 15129 results on 606 pages for 'orientation changes'.


  • Android Card Game Database for Deck Building

    - by Singularity222
    I am making a card game for Android where a player can choose from a selection of cards to build a deck that would contain around 60 cards. Currently, I have the entire database of cards created so that the user can browse it. The next step is allowing the user to select cards and create a deck with whatever cards they would like. I have a form where the user can search for specific cards based on a few different attributes. The search results are displayed in a ListActivity. My thought about deck creation is to add the primary key of each card the user selects to a SQLite database table, along with the quantity they would like in the deck. That way, as the user performs searches for cards, they can see the state of the deck. Once the user decides to save the deck, I'll export the card list to XML and wipe the contents of the table. If the user wanted to make changes to the deck, they would load it, and it would be parsed back into the table so they could make the changes. A similar situation would occur when they eventually load the deck to play a game. I'm just curious what the rest of you may think of this method. Currently, this is a personal project and I am the only one working on it. If I can figure out the best implementation before I even begin coding, I'm hoping to save myself some time and trouble.
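
    A minimal sketch of what that working table could look like in SQLite; the table and column names (deck_card, cards, _id) are hypothetical stand-ins for whatever the existing card database uses:

        -- one row per distinct card in the in-progress deck
        CREATE TABLE deck_card (
            card_id  INTEGER PRIMARY KEY REFERENCES cards(_id),
            quantity INTEGER NOT NULL DEFAULT 1
        );

        -- the user adds three copies of card 42 to the deck
        INSERT OR REPLACE INTO deck_card (card_id, quantity) VALUES (42, 3);

        -- join back against the card table to show the current deck state
        SELECT c.name, d.quantity
        FROM deck_card d JOIN cards c ON c._id = d.card_id;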


  • VirtualBox, physical disks, and snapshots

    - by noamtm
    I have an Ubuntu guest installed on a raw HDD (entire disk, not a partition). It runs just fine under WinXP. In addition, I was able to boot directly into it (using the BIOS boot-device selector). At some point, I have created a snapshot of the VM. I have made some changes since then. Today, after a long time, I tried booting into the "VM" disk. It worked, but all the data there is pre-snapshot (or, rather, frozen at the snapshot). Can I somehow merge the changes since I took the snapshot into the physical state?


  • 'cp' skips some of Eclipse's dot directories

    - by Dustin Digmann
    I am trying to backup my Eclipse .metadata directory. The command I run is:

        cp -Rf ~/some/where/.metadata/* ~/some/backup/.metadata/

    The first time I tried this, the copy skipped the lock file and the .plugins and .mylyn directories. After doing some research, I found some threads mentioning permission changes. I applied the changes and found some success. Now, running the script will not create or traverse into the .plugins or .mylyn directories. Additional research has come up with zero results. I am using:

        - Windows XP SP 3
        - Cygwin 1.7.1-1
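
    The likely culprit here is not permissions but shell globbing: in bash, an unquoted * does not match names that begin with a dot, so .lock, .plugins and .mylyn are never passed to cp at all, which matches exactly the entries being skipped. Two common workarounds, sketched as suggestions rather than a confirmed fix for this setup:

        # copy the directory's contents including dot entries
        # (the trailing /. means "this directory itself")
        cp -Rf ~/some/where/.metadata/. ~/some/backup/.metadata/

        # or tell bash to include dotfiles in globs for this session
        shopt -s dotglob
        cp -Rf ~/some/where/.metadata/* ~/some/backup/.metadata/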


  • How to setup an iTunes library to use between two Macs?

    - by stead1984
    As you can tell, this is nowhere near work related. I have an iMac G5 where my iTunes library is currently hosted, and I have also just got a new MacBook Pro. What I want to be able to do is sync my iTunes library from my iMac to my MacBook Pro, so that it is accessible away from my home network; then, if I make any changes to the iTunes library (like changing a track name), it should sync these changes back once I connect back to the home network. My current iTunes library contains music, videos, podcasts, playlists and iPhone apps. I would also like iTunes to track play counts collectively between the iMac and the MacBook Pro.


  • Running an rsync sweep before initializing lsyncd for synchronizing instances on EC2

    - by chrisallenlane
    My company uses several EC2 servers that will scale up and down according to the load we're receiving on our sites at any given moment. For the sake of our discussion here, we're running four instances:

        - master.ourdomain.com - the file syncing "hub" of the webservers
        - www1/www2/www3.ourdomain.com - three webservers which turn on or off as dictated by load

    I'm using lsyncd to keep all of the webservers in sync, and for the most part it's working quite well. We're using a two-way syncing scheme, such that each webserver syncs against master, and master syncs against each webserver. Thus, the webservers are kept in sync, even though they aren't syncing against each other directly. I'm having one problem that I'm having a hard time solving, though. It occurs under these circumstances:

        - when changes are made on master (perhaps after we've pushed new code) while some of the redundant webservers are sleeping,
        - and then a sleeping webserver wakes up to absorb load.

    Under that circumstance, I would like the following to happen: first, the newly-awoken webserver should sync its file structure - one way - against master, to bring its web application code up to date. Then, and only then, should it begin pushing changes in its file structure back to master. Unfortunately, currently, when a sleeping server is started, lsyncd pushes changes back to master before updating its own codebase, thus overwriting new code with old. So, before lsyncd starts, I'd like to be able to synchronize the webserver's code against master's, perhaps by running a simple one-way rsync between the two machines. We're running lsyncd v2, and I've tried to make this happen by using the "bash" configuration options documented in the lsyncd manual. My configuration file looks like this:

        settings = {
            logfile = "/home/user/log/lsyncd/log.txt",
            statusFile = "/home/user/log/lsyncd/status.txt",
            maxProcesses = 2,
            nodaemon = false,
        }

        bash = {
            onStartup = "rsync [email protected]:/home/user/www /home/user/www"
        }

        sync {
            default.rsyncssh,
            source = "/home/user/www/",
            host = "[email protected]",
            targetdir = "/home/user/www/",
            rsyncOpts = "-ltus",
            excludeFrom = "/home/user/conf/lsyncd/exclude"
        }

    (I've obviously redacted that file somewhat to protect the identities of the guilty.) Simply put, though, this just isn't working. How else might I approach this problem? I was looking at the --delete-after option in man rsync, but I don't think that does what I'm looking for. Are there any suggestions about how I should approach this problem? Thanks for lending your time and expertise. Chris
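
    One approach worth sketching: instead of relying on lsyncd's startup hooks, wrap the daemon in a small init script that runs the one-way rsync first and only then launches lsyncd. This assumes the waking webserver can reach master over ssh before lsyncd starts; the host and paths below mirror the (redacted) config above and are placeholders:

        #!/bin/sh
        # pull master's tree down first, one way, so stale local code is
        # replaced before anything can be pushed back up; --delete also
        # removes files that master has since dropped
        rsync -a --delete [email protected]:/home/user/www/ /home/user/www/

        # only once the local tree matches master, start two-way syncing
        exec lsyncd /etc/lsyncd/lsyncd.conf.lua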


  • PerlRegEx vs RegularExpressionsCore Delphi Units

    - by Jan Goyvaerts
    The RegularExpressionsCore unit that is part of Delphi XE is based on the latest class-based PerlRegEx unit that I developed. Embarcadero only made a few changes to the unit. These changes are insignificant enough that code written for earlier versions of Delphi using the class-based PerlRegEx unit will work just the same with Delphi XE.

        - The unit was renamed from PerlRegEx to RegularExpressionsCore. When migrating your code to Delphi XE, you can choose whether you want to use the new RegularExpressionsCore unit or continue using the PerlRegEx unit in your application. All you need to change is which unit you add to the uses clause in your own units.
        - Indentation and line breaks in the code were changed to match the style used in the Delphi RTL and VCL code. This does not change the code, but makes it harder to diff the two units.
        - Literal strings in the unit were separated into their own unit called RegularExpressionsConsts. These strings are only used for error messages that indicate bugs in your code. If your code uses TPerlRegEx correctly then the user should not see any of these strings.
        - My code uses assertions to check for out of bounds parameters, while Embarcadero uses exceptions. Again, if you use TPerlRegEx correctly, you should never get any assertions or exceptions.

    The Compile method raises an exception if the regular expression is invalid in both my original TPerlRegEx component and Embarcadero's version. If your code allows the user to provide the regular expression, you should explicitly call Compile and catch any exceptions it raises so you can tell the user there is a problem with the regular expression. Even with user-provided regular expressions, you shouldn't get any other assertions or exceptions if your code is correct. Note that Embarcadero owns all the rights to their RegularExpressionsCore unit. Like all the other RTL and VCL units, this unit cannot be distributed by myself or anyone other than Embarcadero. I do retain the rights to my original PerlRegEx unit, which I will continue to make available for those using older versions of Delphi.


  • Resolution stuck in 640x480 in grub, 11.04 and 12.04

    - by user89797
    I have three operating systems on my machine: Windows 7 x64, and Ubuntu 11.10 and 12.04, both x64 as well. All three were running at full resolution for my monitor, as was the GRUB 1.99 boot screen. After booting into Windows, I rebooted my machine and found my GRUB resolution was suddenly 640x480. Booting into both versions of Ubuntu, I find myself stuck at that resolution as well. I made no driver changes recently, and hadn't even booted into the 11.10 build in a month or more. I've gone through both proprietary Nvidia driver options for my card (GeForce 9800GT) as well as the open source drivers in 12.04, to no avail. I can't figure out what could have caused this change in both versions of Ubuntu and GRUB simultaneously. Windows 7 is unaffected, so I think that safely rules out hardware failure.

    EDIT: Ok, so I couldn't boot any graphical live disks. I tried Ubuntu 12.04 i386 and x64 as well as the 12.10 beta x64, and all of them would flash the initial logo, go to a blank screen with a flashing cursor in the upper left, and then my display would die. I managed to boot 12.04 Server and get into recovery. I reinstalled GRUB and went into recovery mode for my 12.04 build. If I boot in safe graphics mode I can get 1280x768, but as soon as I reboot it's broken again. I've tried reinstalling the Nvidia drivers, and that leaves me with a system stuck at max 640x480. None of these changes have had any impact on the 11.10 build, which is still stuck at 640x480. Given that I can push a somewhat higher resolution in 12.04, and full resolution in Windows 7, I'm pretty convinced it's not an issue of my monitor failing. It must be something to do with the graphics drivers. I can't figure out what the issue could be, though. I'm especially perplexed that I can't boot any live images.
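
    Since GRUB itself dropped to 640x480, one thing worth checking - a hedged suggestion, not a confirmed fix for this machine - is whether GRUB's video mode is still pinned, and whether the kernel is told to keep it through the handoff:

        # in /etc/default/grub (values are examples; use the monitor's native mode)
        GRUB_GFXMODE=1280x1024
        GRUB_GFXPAYLOAD_LINUX=keep

        # then regenerate the GRUB configuration
        sudo update-grub

    If GRUB comes back at full resolution but Ubuntu still falls to 640x480, that would point at the graphics driver rather than the bootloader.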


  • Oracle SOA Suite for healthcare integration Dashboard

    - by Nitesh Jain
    Oracle SOA Suite Healthcare came up with a new way of monitoring, where the user can configure a dashboard and follow the dynamic runtime changes. Oracle SOA Suite for healthcare integration dashboards display information about the current health of the endpoints in a healthcare integration application. You can create and configure multiple dashboards as needed to monitor the status and volume metrics for the endpoints you have defined. The dashboards reflect changes that occur in the runtime repository, such as purging runtime instance data, new messages processed, and new error messages. You can display data for various time periods, and you can manually refresh the data in real time or set the dashboard to automatically refresh at set intervals. A dashboard shows the following information:

        - Status: The current status of the endpoint, such as Running, Idle, Disabled, or Errors.
        - Messages Sent: The number of messages sent by the endpoint in the specified time period.
        - Messages Received: The number of messages received by the endpoint in the specified time period.
        - Errors: The number of messages with errors for the endpoint in the given time period.
        - Last Sent: The date and time the last message was sent from the endpoint.
        - Last Received: The date and time the last message was received by the endpoint.
        - Last Error: The date and time of the last error for the endpoint.

    It also shows a detailed view of a specific endpoint:

        - The document type.
        - The number of messages received per second.
        - The total number of messages processed in the specified time period.
        - The average size of each message.


  • How to setup ssh's umask for all type of connections

    - by Unode
    I've been searching for a way to set up OpenSSH's umask to 0027 in a consistent way across all connection types. By connection types I'm referring to:

        1. sftp
        2. scp
        3. ssh hostname
        4. ssh hostname program

    The difference between 3 and 4 is that the former starts a shell, which usually reads the /etc/profile information, while the latter doesn't. In addition, by reading this post I've become aware of the -u option that is present in newer versions of OpenSSH. However, this doesn't work. I must also add that /etc/profile now includes umask 0027. Going point by point:

        1. sftp - Setting -u 0027 in sshd_config as mentioned here is not enough. If I don't set this parameter, sftp uses umask 0022 by default. This means that if I have the two files:

            -rwxrwxrwx 1 user user 0 2011-01-29 02:04 execute
            -rw-rw-rw- 1 user user 0 2011-01-29 02:04 read-write

        then when I use sftp to put them on the destination machine, I actually get:

            -rwxr-xr-x 1 user user 0 2011-01-29 02:04 execute
            -rw-r--r-- 1 user user 0 2011-01-29 02:04 read-write

        However, when I set -u 0027 in the sshd_config of the destination machine, I actually get:

            -rwxr--r-- 1 user user 0 2011-01-29 02:04 execute
            -rw-r--r-- 1 user user 0 2011-01-29 02:04 read-write

        which is not expected, since it should actually be:

            -rwxr-x--- 1 user user 0 2011-01-29 02:04 execute
            -rw-r----- 1 user user 0 2011-01-29 02:04 read-write

        Does anyone understand why this happens?

        2. scp - Independently of what is set up for sftp, permissions always follow umask 0022. I currently have no idea how to alter this.

        3. ssh hostname - No problem here, since the shell reads /etc/profile by default, which means umask 0027 in the current setup.

        4. ssh hostname program - Same situation as scp.

    In sum, setting the umask on sftp alters the result, but not as it should; ssh hostname works as expected, reading /etc/profile; and both scp and ssh hostname program seem to have umask 0022 hardcoded somewhere. Any insight on any of the above points is welcome.

    EDIT: I would like to avoid patches that require manually compiling OpenSSH. The system is running Ubuntu Server 10.04.01 (lucid) LTS with OpenSSH packages from maverick.

    Answer: As indicated by poige, using pam_umask did the trick. The exact changes were the following lines added to /etc/pam.d/sshd:

        # Setting UMASK for all ssh based connections (ssh, sftp, scp)
        session    optional    pam_umask.so umask=0027

    Also, in order to affect all login shells regardless of whether they source /etc/profile or not, the same lines were added to /etc/pam.d/login.

    EDIT: After some of the comments, I retested this issue. At least in Ubuntu (where I tested), it seems that if the user has a different umask set in their shell's init files (.bashrc, .zshrc, ...), the PAM umask is ignored and the user-defined umask is used instead. Changes in /etc/profile didn't affect the outcome unless the user explicitly sources those changes in the init files. It is unclear at this point whether this behavior happens in all distros.
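
    A quick way to see what each connection type actually ends up with - a sketch, with hostname as a placeholder - is to create files through each path and inspect the resulting modes:

        # effective umask for an "ssh hostname program" style connection
        ssh hostname umask

        # push a file through scp and through sftp, then compare permissions
        touch probe
        scp probe hostname:/tmp/probe-scp
        echo "put probe /tmp/probe-sftp" | sftp hostname
        ssh hostname 'ls -l /tmp/probe-scp /tmp/probe-sftp'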


  • Windows 8: SL and HTML

    - by xamlnotes
    I was just pointed to comment on my friend Andrew Brust's blog about Silverlight versus HTML5. Andrew's blog is here: http://geekswithblogs.net/andrewbrust/archive/2011/11/23/windows-8-will-be-here-tomorrow-but-should-silverlight-be.aspx#600915 You can get another idea from another friend of mine, Billy Hollis, here: http://geekswithblogs.net/jalexander/archive/2011/04/09/the-eternal-battle-rich-v.-reachhellip--guest-blogger-billy-hollis.aspx The commenter is raving about HTML5 and how that's the future and SL is not. Well, my reaction is "hogwash". Sure, HTML5 is important and does some interesting stuff. Check out what Bing.com is doing with it on some days and you can see. But to say that XAML is dead is nuts. I have been wrapping up bugs on a cross-browser version of an application for a while now. What's the state of cross-browser today? Well, better than a few years ago, but far from perfect. Each browser vendor interprets the specs in a little different way, and you must account for them. The worst offender among major browsers? Apple and its Safari. I had to make more changes for it than for any other. What's that got to do with XAML and SL/WPF? Well, you write your SL code once and it runs in all browsers that support it, with no changes. The iPad does not support it? Well, they should be taken to court and forced to, just like MS and others have been in the past for locking out competitors. Line-of-business applications? Write them in SL or WPF or both. Use the power of XAML, which far outreaches HTML in any flavor, and move on. We do need HTML5, but it's not a panacea, nor will it replace all other technologies.


  • How would one run a task sequence within a task sequence in SCCM 2012 SP1

    - by BigHomie
    A shining example: inside all of my task sequences I have a group that installs driver packages conditionally, based on computer model. And of course, this list does nothing but grow. The fact that it grows isn't a big deal; what is a big deal is that every time it changes I have to manually copy and paste those changes across every task sequence I have, which of course leaves huge room for human error. The same goes for other groups of tasks that are common across task sequences. I'm looking for a solution where I could centrally manage these tasks, be it linking other task sequences to a group within another task sequence, or creating a separate task sequence and linking to that. I came across a solution by John Marcum (SCCM MVP) that mentioned this ability, but this was a while ago and I can't find the link to it anymore to see if it's even still being updated/maintained. I'm looking for more of a free solution, or even using PowerShell or the ConfigMgr SDK is fine with me; I'm no stranger to either.

    Update: Getting close: http://msdn.microsoft.com/en-us/library/jj217869.aspx


  • Inheritance vs containment while extending a large legacy project

    - by Flot2011
    I have got a legacy Java project with a lot of code. The code uses the MVC pattern and is well structured and well written. It also has a lot of unit tests, and it is still actively maintained (bug fixing, adding minor features). Therefore I want to preserve the original structure and code style as much as possible. The new feature I am going to add is a conceptual one, so I have to make my changes all over the code. In order to minimize changes, I decided not to extend existing classes but to use containment:

        class ExistingClass {
            // .... existing code

            // my code adding new functionality
            private ExistingClassExtension extension = new ExistingClassExtension();

            public ExistingClassExtension getExtension() { return extension; }
        }

        ...
        // somewhere in code
        ExistingClass instance = new ExistingClass();
        ...
        // when I need the new functionality
        instance.getExtension().newMethod1();

    All functionality that I am adding is inside the new ExistingClassExtension class. Actually, I am adding only these two lines to each class that needs to be extended. By doing so, I also do not need to instantiate new, extended classes all over the code, and I may use existing tests to make sure there is no regression. However, my colleagues argue that doing so isn't a proper OOP approach in this situation, and that I need to inherit from ExistingClass in order to add new functionality. What do you think? I am aware of numerous inheritance/containment questions here, but I think my question is different.
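
    For contrast, a minimal sketch of the inheritance route the colleagues are arguing for; the names are hypothetical, mirroring the containment example above. Note what it costs: every creation site has to change, and callers need the subclass type to reach the new functionality.

        // the subclass carries the new functionality directly
        class ExtendedClass extends ExistingClass {
            public void newMethod1() {
                // new functionality, with direct access to any
                // protected state of ExistingClass
            }
        }

        // every creation site must now construct the subclass...
        ExtendedClass instance = new ExtendedClass();
        // ...and the new functionality is called without the extra hop
        instance.newMethod1();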


  • How to color highlight text in a black/white document easily

    - by Lateron
    I am editing a 96-chapter book. The text is in normal black letters on a white background. What I want is to have any changes or additions automatically shown in another (pre-selected) color, without having to a) highlight a word or phrase to be changed or added, and then b) go to the toolbar and click on the font color. In other words, I want the original color of the text to remain as it is, and any additions or changes to be visible in another color, without having to use the toolbar. Can this be done? I use OpenOffice or Word 2007 in Windows 7.


  • I want to version control my entire slice

    - by Tom
    I'm renting a slice (i.e., a VPS) from Slicehost. I've spent a day or two filling up /usr with my favorite packages, /etc with configs and init scripts, and so on. Now I want to:

        1. save this whole setup somewhere (e.g., to load onto another machine),
        2. see what changes I've made to which files,
        3. revert changes, tag revisions, and all that other good version control stuff.

    Saving a disk image gives me (1), but not (2) and (3). Using Subversion (svn import / svn://someotherhost) might give me all three, but I expect problems if I actually try to check a project out into / and maintain .svn directories in root-owned areas. And to load my setup onto a fresh slice, I'd need to install an svn client on it first. Is there a good way to do what I want to do?
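
    One pattern worth sketching (an assumption on my part, not something from the original post): keep a bare git repository outside the root and point its work tree at /, so nothing version-control-related litters root-owned areas. Roughly:

        # bare repo lives under /root; the work tree is the whole filesystem
        git init --bare /root/slice.git
        alias slice='git --git-dir=/root/slice.git --work-tree=/'

        # track only what you care about; everything else stays unmanaged
        slice add /etc /usr/local
        slice commit -m "baseline slice setup"
        slice config status.showUntrackedFiles no   # keep 'slice status' readable

        # later: see what changed, tag revisions, revert files
        slice status
        slice diff
        slice tag baseline

    For /etc alone, the etckeeper tool automates essentially this pattern. Like the svn idea, it still requires installing the version control client on a fresh slice first.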


  • "unresolvable problem" error when upgrading from 12.04 to 14.04

    - by flyingfisch
    So I have solved this issue, but now I have another problem:

        An unresolvable problem occurred while calculating the upgrade.

        This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu

        If none of this applies, then please report this bug using the command
        'ubuntu-bug ubuntu-release-upgrader-core' in a terminal.

    I am not upgrading to a pre-release version of Ubuntu, and I am not running a pre-release either. I have unchecked all my 3rd-party packages using Ubuntu Software Manager, Edit > Software Sources... What else might be wrong?

    UPDATE: After doing sudo update-manager -d and sudo apt-get update; sudo apt-get dist-upgrade as per JimB's post, and then running sudo do-release-upgrade, here is what I get:

        Err http://extras.ubuntu.com trusty/main Translation-en
        Err http://extras.ubuntu.com trusty/main Translation-en_US
        Err http://extras.ubuntu.com trusty/main Translation-en
        Ign http://extras.ubuntu.com trusty/main Translation-en_US
        Ign http://extras.ubuntu.com trusty/main Translation-en
        Fetched 0 B in 0s (0 B/s)
        Checking package manager
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Calculating the changes
        Calculating the changes
        Could not calculate the upgrade
        An unresolvable problem occurred while calculating the upgrade.
        This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug using the command
        'ubuntu-bug ubuntu-release-upgrader-core' in a terminal.
        Restoring original system state
        Aborting
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        === Command detached from window (Mon Aug 18 23:53:10 2014) ===
        === Command terminated with exit status 1 (Mon Aug 18 23:53:10 2014) ===
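
    The extras.ubuntu.com errors hint that a third-party or discontinued repository is still enabled at the apt level, even though it is unchecked in the GUI. A hedged cleanup sketch - back up the files first, since the list names vary per system:

        # disable every add-on source list
        sudo sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/*.list

        # check the main list for extras.ubuntu.com entries as well
        grep -n extras.ubuntu.com /etc/apt/sources.list

        sudo apt-get update
        sudo do-release-upgrade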


  • Windows 7: "localhost name resolution is handled within DNS itself". Why?

    - by Portman
    After 18 years of hosts files on Windows, I was surprised to see this in Windows 7 build 7100:

        # localhost name resolution is handled within DNS itself.
        #    127.0.0.1    localhost
        #    ::1          localhost

    Does anyone know why this change was introduced? I'm sure there has to be some kind of reasoning. And, perhaps more relevantly, are there any other important DNS-related changes in Windows 7? It scares me a little bit to think that something as fundamental as localhost name resolution has changed... it makes me think there are other subtle but important changes to the DNS stack in Win7.


  • Automatic syncing USB HD drive to local HD (different computers)

    - by luri
    I have a desktop PC at work, and a laptop at home... or elsewhere. What I want to do is use a USB HD to store my documents (about 130 GB, maybe more). That would serve as a backup and also port my files to my laptop. I'd like both computers to automatically sync all files there with local copies, so that I can work at either of them and keep updated copies of everything in both (plus the USB drive, which would allow me to work on other computers too, apart from being another backup). Dropbox isn't a solution for me, due to pricing and the 100 GB limit. The workflow would be as follows, to clear things up:

        1. I work on PC1. Changes in files are automatically synced to USB whenever a file is modified.
        2. I go home and boot PC2. I plug in the USB drive, and local files are synced (if changed) with the most recent USB copies.
        3. While I work at PC2, again, changes in files spread to my USB drive.
        4. Whenever I go to PC1 again, I plug in my USB drive and again everything syncs.

    So the questions would be:

        a) Am I crazy?
        b) Can it be done?
        c) Will I have any file conflicts (provided I'm the only one that will modify the files)?
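
    One tool that fits this two-way pattern is Unison, which reconciles changes in both directions and flags genuine conflicts instead of silently overwriting. A sketch, with placeholder paths, of what either PC would run when the drive is plugged in:

        # propagate changes both ways between the local copy and the USB drive;
        # -auto accepts all non-conflicting changes, -batch skips the prompts
        unison ~/Documents /media/usb/Documents -auto -batch

    The same command on the other PC reconciles that machine against the USB copy. For the "automatically whenever a file is modified" part, the command could be run from a file watcher or on a timer; Unison's -repeat option can also keep it running continuously.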


  • Adventures in Windows 8: Working around the navigation animation issues in LayoutAwarePage

    - by Laurent Bugnion
    LayoutAwarePage is a pretty cool add-on to Windows 8 apps, which greatly facilitates the implementation of orientation-aware (portrait, landscape) as well as state-aware (snapped, filled, fullscreen) apps. It has, however, a few issues that are obvious when you use transformed elements on your page.

    Adding a LayoutAwarePage to your application

    If you start with a blank app, the MainPage is a vanilla Page, with no such feature. In order to have a LayoutAwarePage in your app, you need to add this class (and a few helpers) with the following operation:

        1. Right click on the Solution and select Add, New Item from the context menu.
        2. From the dialog, select a Basic Page (not a Blank Page, which is another vanilla page). If you prefer, you can also use Split Page, Items Page, Item Detail Page, Grouped Items Page or Group Detail Page, which are all LayoutAwarePages. Personally, I like to start with a Basic Page, which gives me more creative freedom.

    Adding this new page will cause Visual Studio to show a prompt asking you for permission to add additional helper files to the Common folder. One of these helpers is the LayoutAwarePage class, which is where the magic happens. LayoutAwarePage offers some help for the detection of orientation and state (which makes it a pleasure to design for all these scenarios in Blend, by the way) as well as storage for the navigation state (more about that in a future article).

    Issue with LayoutAwarePage

    When you use UI elements such as a background picture, a watermark label, logos, etc., it is quite common to do a few things with those:

        - Making them partially transparent (this is especially true for background pictures; for instance, I really like a black Page background with a half transparent picture placed on top of it).
        - Transforming them, for instance rotating them a bit, scaling them, etc.

    Here is an example with a picture of my two beautiful daughters in the Bird Park in Kuala Lumpur, as well as a transformed TextBlock. The image has an opacity of 40% and the TextBlock a simple RotateTransform. If I create an application with a MainPage that navigates to this LayoutAwarePage, however, I will have a very annoying effect:

        - The background picture appears with an Opacity of 100%.
        - The TextBlock is not rotated.

    This lasts only for less than a second (during the navigation animation) before the elements "snap into place" and get their desired effect.
Here is the XAML that causes the annoying effect:

        <common:LayoutAwarePage x:Name="pageRoot"
                                x:Class="App13.BasicPage1"
                                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                                xmlns:common="using:App13.Common"
                                xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                                xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                                mc:Ignorable="d">

            <Grid Style="{StaticResource LayoutRootStyle}">
                <Grid.RowDefinitions>
                    <RowDefinition Height="140" />
                    <RowDefinition Height="*" />
                </Grid.RowDefinitions>

                <Image Source="Assets/el20120812025.jpg"
                       Stretch="UniformToFill"
                       Opacity="0.4"
                       Grid.RowSpan="2" />

                <Grid>
                    <Grid.ColumnDefinitions>
                        <ColumnDefinition Width="Auto" />
                        <ColumnDefinition Width="*" />
                    </Grid.ColumnDefinitions>
                    <Button x:Name="backButton"
                            Click="GoBack"
                            IsEnabled="{Binding Frame.CanGoBack, ElementName=pageRoot}"
                            Style="{StaticResource BackButtonStyle}" />
                    <TextBlock x:Name="pageTitle"
                               Grid.Column="1"
                               Text="Welcome"
                               Style="{StaticResource PageHeaderTextStyle}" />
                </Grid>

                <TextBlock HorizontalAlignment="Center"
                           TextWrapping="Wrap"
                           Text="Welcome to my Windows 8 Application"
                           Grid.Row="1"
                           VerticalAlignment="Bottom"
                           FontFamily="Segoe UI Light"
                           FontSize="70"
                           FontWeight="Light"
                           TextAlignment="Center"
                           Foreground="#FFFFA200"
                           RenderTransformOrigin="0.5,0.5"
                           UseLayoutRounding="False"
                           d:LayoutRounding="Auto"
                           Margin="0,0,0,153">
                    <TextBlock.RenderTransform>
                        <CompositeTransform Rotation="-6.545" />
                    </TextBlock.RenderTransform>
                </TextBlock>

                <VisualStateManager.VisualStateGroups>
                    [...]
                </VisualStateManager.VisualStateGroups>
            </Grid>
        </common:LayoutAwarePage>

    Solving the issue

    In order to solve this "snapping" issue, the solution is to wrap the transformed elements into an empty Grid. Honestly, to me it sounds like a bug in the LayoutAwarePage navigation animation, but thankfully the workaround is not that difficult: simply change the main Grid as follows.

        <Grid Style="{StaticResource LayoutRootStyle}">
            <Grid.RowDefinitions>
                <RowDefinition Height="140" />
                <RowDefinition Height="*" />
            </Grid.RowDefinitions>

            <Grid Grid.RowSpan="2">
                <Image Source="Assets/el20120812025.jpg"
                       Stretch="UniformToFill"
                       Opacity="0.4" />
            </Grid>

            <Grid>
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="Auto" />
                    <ColumnDefinition Width="*" />
                </Grid.ColumnDefinitions>
                <Button x:Name="backButton"
                        Click="GoBack"
                        IsEnabled="{Binding Frame.CanGoBack, ElementName=pageRoot}"
                        Style="{StaticResource BackButtonStyle}" />
                <TextBlock x:Name="pageTitle"
                           Grid.Column="1"
                           Text="Welcome"
                           Style="{StaticResource PageHeaderTextStyle}" />
            </Grid>

            <Grid Grid.Row="1">
                <TextBlock HorizontalAlignment="Center"
                           TextWrapping="Wrap"
                           Text="Welcome to my Windows 8 Application"
                           VerticalAlignment="Bottom"
                           FontFamily="Segoe UI Light"
                           FontSize="70"
                           FontWeight="Light"
                           TextAlignment="Center"
                           Foreground="#FFFFA200"
                           RenderTransformOrigin="0.5,0.5"
                           UseLayoutRounding="False"
                           d:LayoutRounding="Auto"
                           Margin="0,0,0,153">
                    <TextBlock.RenderTransform>
                        <CompositeTransform Rotation="-6.545" />
                    </TextBlock.RenderTransform>
                </TextBlock>
            </Grid>

            <VisualStateManager.VisualStateGroups>
                [...]
            </VisualStateManager.VisualStateGroups>
        </Grid>

    Hopefully this will help a few people; I banged my head on the wall for a while before someone at Microsoft pointed me to the solution ;)

    Happy coding,
    Laurent

    Laurent Bugnion (GalaSoft)


  • Frustrated with MythTV 0.26

    - by Mike
    I've been using MythTV for a while now. Until a hardware crash, I had a 0.25 box running without problems. I had to get new hardware and am now in the process of setting up 0.26.

        - Every time I pick a menu option, it hangs for 1 minute.
        - Every time I try to start the backend, same thing.
        - I pick a new theme in the frontend, but it never gets used.
        - I try to test audio, but all I get is static (from the proper channels, though).
        - I've set up the storage groups in the backend and put a video in /storage/videos, but the frontend won't see it when I scan for changes.
        - I make changes in the frontend configuration and they don't get saved (or get lost randomly 2-3 reloads later).

    Obviously there is something I am doing catastrophically wrong, but I have no idea what. Are the storage groups not working yet? Maybe I need to delete all the storage group entries and just use the frontend override to set the path? I'm currently using LVM across 3 hard disks, and this has worked well for me in the past. I'd like to use storage groups, but frankly I don't see them working yet at all - especially not for videos (which is what we watch 99% of the time). Anyone have any suggestions for me to try before I just call 0.26 bad names and wipe the system?


  • Multiple VMs for Tomcat cluster vs Multiple Tomcat instances on one physical box

    - by Greymeister
    I'm working on a project that will be implemented into production using a cluster of Apache Tomcat instances, and I'm looking for the best hardware/OS solutions; VMs have come up as one option. I have run ESXi/ESX instances before for development and testing, but I'm curious whether, for a hosting environment, having multiple VMs is actually worse than just configuring a server to host multiple instances of Tomcat. These are my guesses:

        Pros for VMware:
        - Easier maintenance/backup for individual VMs (VMware makes this easy)
        - Can remote login to individual VMs without having to give host access (security?)
        - Easier way to re-purpose the machine for OS/hardware changes

        Pros for running on one physical machine:
        - Overhead of only one OS (also no VMware footprint)
        - Apply OS updates and security changes once
        - One less administrative layer (no VM expertise required)

    I'm curious if anyone has any other ideas about what the benefits would be for either option.
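
    For reference, the multiple-instances-on-one-box option usually leans on Tomcat's CATALINA_BASE mechanism: one shared binary install, one configuration tree per instance. A rough sketch, with placeholder paths and the port edits left as comments:

        # shared Tomcat binaries
        export CATALINA_HOME=/opt/tomcat

        # per-instance tree: its own conf/, logs/, temp/, webapps/, work/
        export CATALINA_BASE=/srv/tomcat-a
        mkdir -p $CATALINA_BASE/{conf,logs,temp,webapps,work}
        cp $CATALINA_HOME/conf/server.xml $CATALINA_BASE/conf/

        # edit $CATALINA_BASE/conf/server.xml so the shutdown port and the
        # HTTP/AJP connector ports don't collide with the other instances,
        # then start this instance against its own base directory
        $CATALINA_HOME/bin/catalina.sh start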


  • Valid reason for employer to breach freelance contract

    - by Costas
    Please don't close this as off-topic. According to the FAQ, I can post programming-related questions. I was working on a project, and when it was halfway completed (1 week's work), the employer backed out and refused to pay me. Shortly before this he was being very rude. He was having problems configuring the server, and he told me it was my fault and that I had to fix it. After I spent several hours trying to figure out the problem, it turned out to be his fault. After this, when I put the code on the server, he found 1 bug that I had missed. He freaked out, accused me of being a bad programmer, and told me that the code was shit and that he couldn't use it. He said that if there is a bug in the code, that means the code is bad and he can't use it. He would have to throw the code away and hire someone else. He kept reiterating his argument: "why should I pay for code that I can't use". And I kept telling him the code was fine and urged him to have another programmer give him a second opinion. But he would have none of that. He said he would compensate me for my troubles by paying me $250. Then he changed his mind and lowered that to $200. Then a third time he changed his mind and said he doesn't want to compensate me at all. I'm left frustrated because, besides being rude, he did not at any time tell me he was unhappy with the work that I was doing. So my question is: is the above a valid reason to back out of a verbal contract, in your opinion?


  • My computer will not reboot after fresh install of ubuntu 12.04LTS

    - by user170715
    I bought a new computer yesterday, and it came with Windows 8. When installing Ubuntu, I chose the erase-and-install option, thinking that Ubuntu would install easily like it did for my old laptop... After a successful install, and following the instructions telling me to reboot to finish installation and remove the installation media, my computer booted fine. However, once I began installing updates via Update Manager and activating the additional driver {ATI/AMD proprietary FGLRX graphics driver (post-release updates)} out of the following:

        - Experimental AMD binary Xorg driver and kernel module
        - ATI/AMD proprietary FGLRX graphics driver (*experimental* beta)
        - ATI/AMD proprietary FGLRX graphics driver (post-release updates)

    then rebooted to finish making changes, I got an error (Reboot and select proper boot device). At this point I was stuck, so I eventually reinstalled Ubuntu and repeated the exact same steps until right before rebooting to finish making changes. However, this time I used this Boot Repair tool:

        sudo add-apt-repository ppa:yannubuntu/boot-repair
        sudo apt-get update
        sudo apt-get install -y boot-repair
        boot-repair

    After running the program, I get a "boot successfully repaired" message. Then I try to reboot again and get the GNU GRUB screen where it asks which option I would like to boot:

        normal
        recovery
        memorytest

    Once it begins loading, you see the code moving across the screen, then it pauses at a certain point and doesn't do anything. If someone could tell me how to fix this or get Windows 8 back soon, I'd appreciate it because, like I said, I just bought it yesterday and now I can't even use it.


  • Releasing software/Using Continuous Integration - What do most companies seem to use?

    - by Sagar
    I've set up our continuous integration system, and it has been working for about a year now. We have finally reached a point where we want to do releases using the same. Before our CI system, the processes that were used were:

        1. (Develop) -> Ready for release -> Create a branch -> (Build -> Fix bugs as QA finds them) Loop -> Final build -> Tag
        2. (Develop) -> Ready for release -> (build -> fix bugs) Loop -> Tag

    Our CI setup:

        - 1 server for development (DEV)
        - 1 server for qa/release (QA)

    The second process has integrated into CI perfectly. I create a branch when the software is ready for release, and the branch never changes thereafter, which means the build is reproducible without having to change the CI job. Any future development takes place on HEAD, and even maintenance releases get a completely new branch and a completely new job, which remains on the CI system forever, and then some. The first method is harder to adapt. If the branch changes, the build is not reproducible unless I use the tag to build [jobs on the CI server use the branch for QA/RELEASE, and HEAD for development builds]. However, if I use the tag to build, I have to create a new CI job to build from the tag (losing the changelog on the server), or change the existing job (losing the original job configuration). I know this sounds complicated, and if required, I will rewrite/edit to explain the situation better. However, my question: [if at all] what process does your company use to release software using continuous integration systems? Is it even done using the CI system, or manually?


  • Update the remote of a git branch after renaming

    - by Dror
    Consider the following situation. A remote repository has two branches, master and b1. In addition, it has two clones, repo1 and repo2, and both have b1 checked out. At some point, in repo1, the name of b1 was changed. As far as I can tell, the following is the right procedure to change the name of b1:

        $ git branch -m b1 b2                  # changes the name of b1 to b2
        $ git push origin :b1                  # delete b1 remotely
        $ git push --set-upstream origin b2    # create b2 remotely and direct the local branch to track the remote

    Now, afterwards, in repo2 I face a problem. git pull doesn't pull the changes from the branch (which is now called b2 remotely). The error returned is:

        Your configuration specifies to merge with the ref 'b1'
        from the remote, but no such ref was fetched.

    What is the right way to do this? Both the renaming part and the updating in other clones?
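
    For the repo2 side, a sketch of what generally does it, assuming the remote is named origin: prune the stale remote-tracking ref, rename the local branch to match, and repoint its upstream.

        $ git fetch origin --prune      # drop the stale origin/b1, pick up origin/b2
        $ git branch -m b1 b2           # rename the local branch to match
        $ git branch -u origin/b2 b2    # make the local branch track the new upstream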


  • Is a "model" branch a common practice?

    - by dukeofgaming
    I just thought it could be a good thing to have a dedicated version control branch for all database schema changes, and I wanted to know if anyone else is doing the same and what the results have been. Say that you are working with:

        1. Schema model/documentation (some file where you model the database visually to generate the schema source, say MySQL Workbench, with a .mwb file, which is binary)
        2. Schema source (a .sql file)
        3. Schema-based code generation

    The normal way we were working was with feature branches, so we would make changes to the model files (the database-specific ones), and then have to regenerate points 2 and 3, dealing with the possible conflicts (or even code rewriting). Now say that your workflow goes the same way as the previous item numbering. With a model branch, you wouldn't have to reconcile the schema model with binaries in other feature branches, or have to regenerate schema source and regenerate code (which might have human code on top of it). It makes so much sense to me that it feels weird not having seen this earlier as a common practice.

    Edit: I'm counting on branch merges to be the assertions for the model matching the code. I use a DVCS, so I don't fear long-lived branches or scary-looking merges. I'm also doing feature branching.

