Search Results

Search found 32814 results on 1313 pages for 'change notification'.


  • Errors in ~/.xsession-errors

    - by Kuberan Naganathan
    I'm getting errors in ~/.xsession-errors. I'm running Ubuntu 12.04. Many apps fail to run, without any mention of problems in the .xsession-errors file. I looked around and tried to resolve the issues myself, but have failed so far. I have to say it's possible that the issue is related to me mounting /home on another partition. (I say possibly because stuff worked OK for a while.) Fortunately my .xsession-errors file is small enough to post here. Thanks in advance for the help:

    ```
    gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
    gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
    gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
    gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
    Backend : gconf
    Integration : true
    Profile : unity
    Adding plugins
    Initializing core options...done
    (gnome-settings-daemon:2547): color-plugin-WARNING **: failed to get edid: unable to get EDID for output
    (gnome-settings-daemon:2547): color-plugin-WARNING **: unable to get EDID for xrandr-default: unable to get EDID for output
    (gnome-settings-daemon:2547): color-plugin-WARNING **: failed to reset xrandr-default gamma tables: gamma size is zero
    Initializing composite options...done
    Initializing opengl options...done
    Initializing decor options...done
    ** Message: applet now removed from the notification area
    Initializing vpswitch options...done
    Initializing snap options...done
    Initializing mousepoll options...done
    Initializing resize options...done
    Initializing place options...done
    Initializing move options...done
    Initializing wall options...done
    Initializing grid options...done
    I/O warning : failed to load external entity "/home/kuberan/.compiz/session/10754cf696d335e98e13471376531156900000024960034"
    Initializing session options...done
    Initializing gnomecompat options...done
    Initializing animation options...done
    Initializing fade options...done
    Initializing unitymtgrabhandles options...done
    Initializing workarounds options...done
    Initializing scale options...done
    compiz (expo) - Warn: failed to bind image to texture
    Initializing expo options...done
    Initializing ezoom options...done
    ** Message: using fallback from indicator to GtkStatusIcon
    (compiz:2560): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
    Initializing unityshell options...done
    Setting Update "main_menu_key"
    Setting Update "run_key"
    Setting Update "icon_size"
    ** Message: moving back from GtkStatusIcon to indicator
    ```

    Read the article

  • I'm a Subversion geek; why should I consider (or not consider) Mercurial, Git, or any other DRCS?

    - by Pierre 303
    I've tried to understand the benefits of DRCS. I must admit I still don't get it. Here are my current beliefs; I'm ready to have them destroyed thanks to your expertise. I know I'm probably resisting change; I just want to evaluate how much that change will cost me.

    - Merging hell can be solved by just applying good practices such as continuous integration. Having a private branch for a few days is no good practice when you are in a self-managing team with real collaboration. I use branching only in very rare cases, and I keep a branch for every major version, in which I fix bugs merged from the trunk.
    - I see the value of committing offline and then pushing online, but continuous integration can help with this too.
    - I work on very large projects, and I have never noticed Subversion being slow, even when the server is 5000 km away on the internet and my connection is small (less than 1024D/128U).
    - Hard disk space is cheap, so having a copy of the source code locally doesn't look like a problem to me. I already have a full copy of the last version on my disk. I don't understand the distributed thing there (maybe THIS IS the key to my understanding?).

    I'm not new to the industry, and judging by my difficulty to understand, I don't think DRCSs are easier to understand than Subversion-like systems. In fact, I don't understand... Doctor, give me your diagnosis.

    Read the article

  • Single CAS web application in a cluster

    - by Dolf Dijkstra
    Recently a customer wanted to set up a cluster of CAS nodes to be used together with WebCenter Sites. In the process of setting this up they realized that they needed to create a web application per managed server. They did not want this management burden, but would like to have one web application deployed to multiple nodes. The reason that there is a need for a unique application per node is that the web application contains information that needs to be unique per node: the postfix for the ticket id. My customer would like to externalize the node-specific configuration to either a specific classpath per managed server or to system properties set at startup.

    It turns out that the postfix for ticket ids is managed through a property, host.name, and that this property can be externalized. The host.name property is used in /webapps/cas/WEB-INF/spring-configuration/uniqueIdGenerators.xml. It is set in /webapps/cas/WEB-INF/spring-configuration/propertyFileConfigurer.xml, in a PropertyPlaceholderConfigurer. The documentation for PropertyPlaceholderConfigurer (http://static.springsource.org/spring/docs/2.0.x/api/org/springframework/beans/factory/config/PropertyPlaceholderConfigurer.html) indicates that the properties defined through the PropertyPlaceholderConfigurer can be externalized.

    To enable this externalization you would need to change host.properties so it is generic for all the managed servers and thus can be reused by all of them:

    ```
    host.name=${cluster.node.id}
    ```

    The next step is to change the startup scripts for the managed servers and add a system property -Dcluster.node.id=<something unique and stable>. Voilà, the postfix is externalized and the web application can be shared amongst the cluster nodes.

    Read the article

  • How-To: Run CMSDK against a RAC cluster

    - by frank.closheim
    Using CMSDK in a production environment often requires a robust, reliable and failover-enabled repository. When using Oracle Real Application Clusters (RAC) with your CMSDK repository you need to have a specific configuration in place to support such a setup. This post will explain the configuration steps required when running CMSDK 9.0.4.6 with Oracle WebLogic Server (WLS).

    In the previous CMSDK 9.0.4.2 version a RAC-enabled connect string looked like this:

    ```
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac2)(PORT = 1521))
      (LOAD_BALANCE = NO)
      (FAILOVER = ON)
      (CONNECT_DATA =
        (SERVICE_NAME = rac)
        (failover_mode = (type=select)(method=basic))
      )
    )
    ```

    CMSDK 9.0.4.6 makes use of data sources to connect to the underlying database. These data sources are configured inside your application server, such as Oracle WebLogic Server. In Oracle WebLogic Server 10.3.4, a single data source implementation has been introduced to support a RAC cluster. It responds to Fast Application Notification (FAN) events to provide Fast Connection Failover (FCF), Runtime Connection Load-Balancing (RCLB), and RAC instance graceful shutdown. XA affinity is supported at the global transaction id level. The new feature is called WebLogic Active GridLink for RAC; it is implemented as the GridLink data source within WebLogic Server.

    This GridLink data source also works with Oracle Single Client Access Name (SCAN). SCAN is a feature used in RAC environments that provides a single name for clients to access any Oracle database running in a cluster. You can think of SCAN as a cluster alias for databases in the cluster. The benefit is that the client's connect information does not need to change if you add or remove nodes or databases in the cluster. The CMSDK 9.0.4.6 documentation describes how to create a regular JDBC data source named jdbc/OracleDS. Please refer to the following document which describes in detail how to create a GridLink data source in WLS.

    Read the article

  • Best way: restructure an existing Team Foundation Server (TFS) solution

    - by dhh
    In my department we are developing several smaller add-ons for a unified communication server. For versioning and distributed development we use a Team Foundation Server 2012. But: there is only one large TFS solution for all of our applications and libraries:

    ```
    Main Solution
        Applications
            App 1
            App 2
            App 3
        Externals
        Libraries
            Lib 1
            Lib 2
        Tools
    ```

    The Applications path contains all main applications. Those are not depending on each other, but they depend on the Libraries and Externals projects. The Externals path contains some external DLLs referenced in our applications and libraries. The Libraries path contains commonly used libs (UI templates, helper classes, etc.). They do not depend on each other and they are referenced in the Libraries and the Tools projects. The Tools path contains some helper programs like setup helpers, update web services, etc. Now, there are some major points why I'd like to change this structure:

    - We can't use server builds.
    - It's uncomfortable to manage TFS scrum management (sprints, impediments, etc.) with a solution structure like that.
    - Every developer always has access to all projects in the solution.
    - A complete build lasts too long if one accidentally hits [F6] in Visual Studio...

    What would you change in this solution? How would you break those projects into smaller solutions, and how should those solutions be structured? My first approach would be to create one TFS project for each application, library and tool. But how can I ensure that e.g. App 2 always contains the newest version of Lib 1? Do I have to monitor changes on Lib 1 and update App 2 manually as soon as the lib changes? Or can I somehow force Visual Studio to always use the newest version of an external project?

    Read the article

  • Leadership Tip – Vent Up!

    - by D'Arcy Lussier
    Leadership is difficult, for many reasons. One of those reasons is that we not only need to keep ourselves motivated when difficult or challenging times come, but we also need to motivate our teams and keep them focussed on the tasks at hand regardless of the mortars being rained down around them. Inexperienced (and experienced) leaders can fall into the "me-too" mentality: the leader sees themselves as a member of the team instead of the leader of the team. Once a leader changes the team's view, so that he/she is seen as a peer and not the leader, dynamics on the team can change. One of the biggest dangers is that the leader starts sharing frustrations, fears, concerns, etc. with the team that they're supposed to be leading on to victory. This can destroy a team's morale and productivity. One simple thing you can do to counter this is remember this rule when it comes to venting: Vent Up! Don't vent sideways or down; vent up. Vent to the people above you: they're the ones that tend to have the power to actually change things anyway. You as a leader stay healthy by getting your frustrations and concerns off your chest, your team is still insulated from them, and your superiors are aware of issues that need to be addressed, or can coach you through the obstacles. D

    Read the article

  • Life Is Full Of Changes (Part 1)

    - by Brian Jackett
    Today will be my last day with Sogeti. I've been with Sogeti USA for just over 4 years. In that time I've gotten to work on some great projects, develop relationships with some brilliant and passionate people, participate in the .NET developer and SharePoint communities, and grow my skills in a number of areas I'm passionate about.

    As with all good things, they must come to an end though. I've accepted a position with another company and will provide more details once the transition has completed. This decision was a difficult one to make, but it provides a great career opportunity on many levels. As much as my new schedule allows, I plan to continue participating in local user groups, speaking at conferences, and blogging.

    Speaking of which, you may have noticed my reduced blogging activity in the past few months. In addition to a career change I'm also in the process of moving to a new residence (only a few miles from my current residence, so I'll still be in Columbus). Searching for a new place, filling out paperwork, and all of the other work associated with this move has taken away a good chunk of the time I used to devote to blogging. Once everything gets settled with the move and job change, I'll re-evaluate how much time I can devote to blogging.

    A big thanks to Sogeti and everyone who has been so supportive over my time with them. It's hard to move on, but I am excited for the prospects that the future will bring.

    -Frog Out

    Read the article

  • Is white the best base color to start with when planning to shade sprites within Unity?

    - by SpartanDonut
    I'm looking into prototyping a game in Unity which will consist of solid square sprites / tiles. I figure I can represent different types of objects with different colors for each of the tiles in the game. I figure that I can import a single square sprite and shade it appropriately in Unity as opposed to imported squares of many different colors. My experience with adjusting the hue and saturation within Photoshop shows that white is not an easy color to change as things that are white often stay white. My testing in Unity shows that I can change the "color" of a sprite to anything other than white and the sprite is seemingly shaded appropriately, despite what I would have thought given my Photoshop experience. Since white objects do seem to take on the appropriate color shading when changed within Unity my gut tells me that this is the best base color to begin with, meaning that I can import a single white square sprite and simply adjust the color to represent different objects and object states. Is a white sprite actually the best color sprite to begin with and why does something like this work in Unity as opposed to adjusting the hue and saturation within Photoshop?
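    A minimal, language-agnostic sketch in Python of the multiplicative tinting that sprite renderers typically apply (per-channel multiply, values in the 0..1 range) shows why a white base can take on any color while a colored base cannot; Photoshop's hue/saturation adjustment is a different operation entirely:

    ```python
    # Multiplicative tint: output = base * tint, per RGB channel (0..1 range).
    def tint(base, color):
        """Component-wise multiply of two RGB triples."""
        return tuple(b * c for b, c in zip(base, color))

    white = (1.0, 1.0, 1.0)
    red = (1.0, 0.0, 0.0)

    # A white base reproduces the tint exactly: every color is reachable.
    print(tint(white, (0.2, 0.6, 1.0)))  # -> (0.2, 0.6, 1.0)

    # A red base has no green or blue to scale, so those channels stay 0.
    print(tint(red, (0.2, 0.6, 1.0)))    # -> (0.2, 0.0, 0.0)
    ```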

    Read the article

  • how to fully unit test functions and their internal validation

    - by Patrick
    I am just now getting into formal unit testing and have come across an issue in testing separate internal parts of functions. I have created a base class of data manipulation (i.e. moving files, chmodding files, etc.) and in moveFile() I have multiple levels of validation to pinpoint when a moveFile() fails (i.e. source file not readable, destination not writeable). I can't seem to figure out how to force a couple of particular validations to fail while not tripping the previous validations. Example: I want the copying of a file to fail, but by the time I've gotten to the actual copying, I've checked for everything that can go wrong before copying. Code snippet (bad code on the fifth line...):

    ```php
    // if the change permissions is set, change the file permissions
    if($chmod !== null) {
        $mod_result = chmod($destination_directory.DIRECTORY_SEPARATOR.$new_filename, $chmod);
        if($mod_result === false
            || $source_directory.DIRECTORY_SEPARATOR.$source_filename == '/home/k...../file_chmod_failed.qif') {
            DataMan::logRawMessage('File permissions update failed on moveFile [ERR0009] - ['.$destination_directory.DIRECTORY_SEPARATOR.$new_filename.' - '.$chmod.']', sfLogger::ALERT);
            return array('success' => false, 'type' => 'Internal Server Error [ERR0009]');
        }
    }
    ```

    So how do I simulate the copy failing? My stop-gap measure was to perform a validation on the filename being copied and, if its absolute path matched my testing file, force the failure. I know this is very bad: putting testing code into the code that will run on the production server. But I'm not sure how else to do it. Note: I am on PHP 5.2, symfony, using lime_test(). EDIT: I am testing the chmodding and ensuring that array('success' => false, 'type' => ...) is returned.
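    One common way out is to route the risky call through a seam that tests can stub, so no test-only branch ships to production. The question is about PHP 5.2/lime, but the pattern is language-agnostic; here is a minimal sketch in Python with hypothetical names:

    ```python
    import unittest
    from unittest import mock

    class DataMan:
        """Toy stand-in for the asker's file-manipulation class (hypothetical)."""

        def move_file(self, src, dst):
            # ... earlier validations (readable source, writeable destination) ...
            if not self._copy(src, dst):   # the seam: one overridable call
                return {"success": False, "type": "copy failed"}
            return {"success": True}

        def _copy(self, src, dst):
            import shutil
            shutil.copy(src, dst)          # the real work, isolated behind the seam
            return True

    class MoveFileTest(unittest.TestCase):
        def test_copy_failure_is_reported(self):
            man = DataMan()
            # Stub only the copy step; every validation before it runs untouched.
            with mock.patch.object(DataMan, "_copy", return_value=False):
                result = man.move_file("a.txt", "b.txt")
            self.assertEqual(result, {"success": False, "type": "copy failed"})

    if __name__ == "__main__":
        unittest.main()
    ```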

    Read the article

  • Option Button in Keyboard Layout > Input Sources is not pressable

    - by user98647
    I would like to set the Caps Lock key as a Compose key, which you do, as far as I remember, by pressing the Options button in Keyboard Layout > Input Sources and then enabling the appropriate option there. That button has not been pressable, though, since I switched to 12.10. It did work in previous releases of Ubuntu. gnome-control-center puts out these errors when I click on Keyboard Layout:

    ```
    (gnome-control-center:3645): common-cc-panel-WARNING **: Could not find current language '?\u0003C!\u007f' in the treeview
    (gnome-control-center:3645): common-cc-panel-WARNING **: locale '"en_US.UTF-8"' isn't valid
    ```

    I'm not sure if the errors are related; maybe they are related to the "interface switching to Chinese" bug, which seems surprisingly widespread:

    - Language changed to Chinese, how do I change it back?
    - Language Support has an unwanted Chinese language option
    - Nautilus Folders Turned Chinese
    - Desktop 12.04 gnome/cairo suddenly in Chinese
    - Unwanted Chinese language got set in system settings
    - I cannot set my system back to English from Chinese Language
    - Gnome-classic language turned into Chinese, how do I change it back to English?
    - Strange display language in gnome shell

    I'm not sure they are related to this bug, but I just wanted to mention it; maybe it helps!

    Read the article

  • Install Windows 7 on a drive with Ubuntu 12.04 already on it. Is my plan good?

    - by John F
    I have Ubuntu 12.04 working fine, but need W7 occasionally. I just wanted to check that my plan for installing it would work? Any help appreciated. Current partitions are:

    ```
    Partition      File System   Mount Point   Size      Used      Flags
    /dev/sda1      ext4          /ext4a        37 GiB    776 MiB   boot
    /dev/sda2      extended                    122 GiB   -
      /dev/sda5    ext4          /             37 GiB    6 GiB
      unallocated  unallocated                 7 GiB     -
      /dev/sda6    ext4          /home         77 GiB    32 GiB
      unallocated  unallocated                 65 GiB    -
    /dev/sda3      linux-swap                  7 GiB     -
    ```

    My plan is to:

    - boot to Ubuntu from a USB ISO
    - change sda1 to NTFS
    - install W7 to sda1
    - use the "Master Boot Record repair" utility to configure dual boot so I can see my original Ubuntu installation as well as W7.

    Have I missed something? I'm concerned as to what the 776 MiB is that will be overwritten by the change to NTFS. It seems large for just the MBR? I would also appreciate it if anyone can explain what sda5 and sda6 are being used for. Is sda5 Ubuntu and sda6 my data? Thanks in advance.

    Read the article

  • MySQL Workbench 5.2.39 GA Released

    - by user13164789
    The MySQL Developer Tools team is announcing the next maintenance release of its flagship product, MySQL Workbench, version 5.2.39. This version contains MySQL Utilities 1.0.5, a set of command-line Python utilities for helping to perform and script various administration tasks for MySQL. A complete list of changes in this release of the Utilities can be found at: http://dev.mysql.com/doc/workbench/en/wb-utils-news-1-0-5.html

    MySQL Workbench 5.2 GA:
    • Data Modeling
    • Query (replaces the old MySQL Query Browser)
    • Administration (replaces the old MySQL Administrator)

    Please get your copy from our Download site. Sources and binary packages are available for several platforms, including Windows, Mac OS X and Linux: http://dev.mysql.com/downloads/workbench/

    Workbench documentation can be found here: http://dev.mysql.com/doc/workbench/en/index.html
    Utilities documentation can be found here: http://dev.mysql.com/doc/workbench/en/mysql-utilities.html

    In addition to the new Query/SQL Development and Administration modules, version 5.2 features improved stability and performance, especially on Windows, where OpenGL support has been enhanced and the UI was optimized to offer better responsiveness. This release also includes improvements to the scripting capabilities of the SQL Editor. You can read more about it at http://wb.mysql.com/workbench/doc/

    For a detailed list of resolved issues, see the change log: http://dev.mysql.com/doc/workbench/en/wb-change-history.html

    If you need any additional info or help please get in touch with us. Post in our forums or leave comments on our blog pages.

    - The MySQL Workbench Team

    Read the article

  • Which version-management methodology should be used for dependent system nodes?

    - by actiononmail
    This is my first question, so please indicate if it is too vague or hard to understand. My question is more related to high-level design. We have a system (specifically an ATCA chassis) configured in a star topology, having a Master Node (MN) and other subordinate nodes (SN). All nodes are connected via Ethernet and run Linux with other proprietary applications. I have to build a recovery framework design so that any software entity, whether it's Linux, the ramdisk or an application, can be rolled back to a previous good version if something bad happens. Thus I am thinking of maintaining a state/version matrix on the MN, where each state (1, 2, ..., n) represents good kernel, ramdisk and application versions for each SN. It may happen that one SN's version depends on another SN's version. Please see the following diagram:

    [diagram]

    So I am in a dilemma whether to use the package-management methodology used by Debian distributions (like Ubuntu) or a Git-repository methodology, in order to roll back to previous good versions on either one SN or on all the dependent SNs. The method should also make it easy to upgrade the SNs along with the MN. Some of the features I am trying to achieve:

    1. An upgrade of even a single software entity is achievable without hindering others.
    2. Dependency checks must be done before applying a rollback or upgrade on each SN.
    3. A user prompt should be given if a dependency check fails. If the user still goes for the rollback, all the SNs should get a notification to roll back their own releases (if required).
    4. The binaries should be distributed to the SNs beforehand so that the recovery process is faster, rather than fetching them every time from the MN.
    5. Release patches from developers for bug fixes and feature enhancements can be applied on a running system.
    6. Each version can be easily tracked and distinguished.

    Thanks
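    Whichever distribution mechanism is chosen (apt-style packages or a Git repository), the state/version matrix and its dependency check (features 2 and 3 above) can be prototyped directly. A minimal sketch in Python, with all node names and version data hypothetical:

    ```python
    # Hypothetical state/version matrix kept on the Master Node (MN): one entry
    # per recorded "known good" cluster state, one row per subordinate node (SN).
    STATES = {
        1: {"SN1": {"kernel": "3.2", "ramdisk": "r5", "app": "1.4"},
            "SN2": {"kernel": "3.2", "ramdisk": "r5", "app": "2.0"}},
        2: {"SN1": {"kernel": "3.4", "ramdisk": "r6", "app": "1.5"},
            "SN2": {"kernel": "3.4", "ramdisk": "r6", "app": "2.1"}},
    }

    # Hypothetical cross-node constraint: SN2 on app 2.1 needs SN1 app >= 1.5.
    DEPENDENCIES = [("SN2", "app", "2.1", "SN1", "app", "1.5")]

    def check_dependencies(state):
        """Return violated constraints for a proposed cluster-wide state.
        (Plain string comparison is enough for this toy version data.)"""
        violations = []
        for node, comp, ver, dep_node, dep_comp, min_ver in DEPENDENCIES:
            if state[node][comp] == ver and state[dep_node][dep_comp] < min_ver:
                violations.append((node, comp, dep_node, dep_comp, min_ver))
        return violations

    # Rolling SN1 back to state 1 while SN2 stays on state 2 violates the rule,
    # so the MN can prompt the user before pushing rollbacks to the other SNs.
    proposed = {"SN1": STATES[1]["SN1"], "SN2": STATES[2]["SN2"]}
    print(check_dependencies(proposed))  # -> [('SN2', 'app', 'SN1', 'app', '1.5')]
    ```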

    Read the article

  • How do I develop a database-utilizing application in an agile/test-driven-development way?

    - by user39019
    I want to add databases (traditional client/server RDBMSs like MySQL/PostgreSQL, as opposed to NoSQL or embedded databases) to my toolbox as a developer. I've been using SQLite for simpler projects with only one client, but now I want to do more complicated things (i.e. db-backed web development). I usually like following agile and/or test-driven-development principles. I generally code in Perl or Python. Questions:

    - How do I test my code such that each run of the test suite starts with a 'pristine' state? Do I run a separate instance of the database server for every test? Do I use a temporary database?
    - How do I design my tables/schema so that it is flexible with respect to changing requirements? Do I start with an ORM for my language, or do I stick to manually coding SQL? One thing I don't find appealing is having to change more than one thing (say, the CREATE TABLE statement and the associated CRUD statements) for one change, because that's error-prone. On the other hand, I expect ORMs to be a lot slower and harder to debug than raw SQL.
    - What is the general strategy for migrating data between one version of the program and a newer one? Do I carefully write ALTER TABLE statements between each version, or do I dump the data and import it fresh in the new version?
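    For the first question, one common pattern is a fixture that builds a fresh schema before each test and tears it down afterwards. A minimal sketch using Python's built-in sqlite3 and unittest so it stays self-contained; the same shape applies to a throwaway PostgreSQL/MySQL test database:

    ```python
    import sqlite3
    import unittest

    SCHEMA = "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"

    class UserStoreTest(unittest.TestCase):
        def setUp(self):
            # Every test gets a brand-new in-memory database: pristine by design.
            self.db = sqlite3.connect(":memory:")
            self.db.execute(SCHEMA)

        def tearDown(self):
            self.db.close()

        def test_insert_and_fetch(self):
            self.db.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
            rows = self.db.execute("SELECT name FROM users").fetchall()
            self.assertEqual(rows, [("alice",)])

        def test_starts_empty(self):
            # Order-independent: nothing leaks in from the previous test.
            count = self.db.execute("SELECT COUNT(*) FROM users").fetchone()[0]
            self.assertEqual(count, 0)

    if __name__ == "__main__":
        unittest.main()
    ```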

    Read the article

  • How to Edit PDFs?

    - by snowguy
    I typically have two needs:

    Scenario A: change a single PDF page. In this case I have a PDF but not the original source file used to create the PDF. I don't want to try to recreate the document from scratch; I'd like to open the PDF and change a few things. A good example of this scenario: I was responsible for planning a big event at a campground site, and I had a PDF of the site. I wanted to start with that document, highlight some parts, add some labels, and remove some parts that weren't relevant.

    Scenario B: combine PDFs or extract information from a PDF. This scenario usually arises because I want a single PDF deliverable that is made up of parts that are best created in different programs. In this case I have the source files for all the documents, but they don't play well enough together to easily create a single PDF deliverable. For part of it, I may want to use LibreOffice Writer; for another page, Gimp; for still another page, LibreOffice Calc. I could use Writer as the master document and embed images or the Calc object into that, but for ultimate control, you can't beat separate PDF documents that are then combined.

    What are the best tools/processes for editing PDFs in Ubuntu?
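    Scenario B can also be scripted rather than done in a GUI. A minimal sketch using the third-party pypdf library (installed with pip install pypdf); the input file names are placeholders for the Writer/Gimp/Calc exports:

    ```python
    from pypdf import PdfWriter

    # Placeholder file names for the separately produced parts.
    parts = ["cover.pdf", "site-map.pdf", "budget.pdf"]

    writer = PdfWriter()
    for part in parts:
        writer.append(part)          # append all pages of each part, in order

    with open("deliverable.pdf", "wb") as out:
        writer.write(out)            # single combined PDF deliverable
    ```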

    Read the article

  • Finding my way back into an old project that was turned upside down by the developer. Your workflow?

    - by Kreativrandale
    After some time I'm asked to work again on a heavy web project I did (layout, HTML/CSS) about a year ago. There are some changes that have to be made, basically some CSS and JS stuff. By now the whole project has been turned upside down by the developer. It gives me a hard time connecting to his work, especially because my old files and file structure won't work anymore. That's why I need an up-to-date working environment, but I don't want to change the files on the server directly; I need to do some testing and improving while working on this. So, what is your workflow in such a case? I have thought about copying the whole server, or parts of it, to a home server of my own, but even that will be a big task for me (I'm more the front-end guy). It would be great if there's a way to shrink it down (PHP, MySQL, ...), since I only need to change some CSS/HTML/JavaScript. Are there any tools available? I'd love to hear how you handle such situations. Thanks a lot!

    Read the article

  • Transformation?

    - by Joe G
    I started working at Oracle in 1997. Since then, we (and most everyone) have been talking about transforming finance operations... but what does that mean exactly? From my perspective, I thought it meant eliminating waste and menial tasks and giving your finance team more time to work on more strategic things. That seems logical and simplistic, but how much progress have finance teams (and their IT departments) really made over the past fifteen years?

    I have yet to talk to a customer that doesn't have one amusing task that makes me chuckle. Sometimes they still print hard copies of transactions to "file," or sometimes they print 700 pages of data to "analyze," or sometimes they cut and paste from one or more reports into a spreadsheet. Upon hearing these things, my first question is always, "Why do you do that?", to which their response is rarely the same. Sometimes it's related to trust (in both the employee and the system). Sometimes it's habit-based. And sometimes it is just impossible to accomplish the end result without some manual effort.

    I will say that I used to print nearly everything that I needed to review. Partly because I liked having the ability to scribble notes on the paper, and partly because it was uncomfortable to read online. However, I have changed. Rarely do I print anything anymore. It's easier for me to read and notate online, and, well, I guess I've just changed my habits.

    So where do you think our resistance to change comes from? Is it truly deficits in our systems, or is it our own personal resistance to change? What's your most annoying and untransformed task?

    Read the article

  • Depending on a fixed version of a library and ignoring its updates

    - by Moataz Elmasry
    I was talking to a technical boss yesterday about a project in C++ that depends on OpenCV. He wanted to include a specific OpenCV version in the SVN repository and keep using that version, ignoring any updates, which I disagreed with. We had a heated discussion about it.

    His arguments:

    - Everything has to be delivered in one package and we can't ask the client to install external libraries.
    - We depend on a fixed version so that new updates of OpenCV won't screw up our code. We can't guarantee that within a version update, e.g. from 3.2.buildx to 3.2.buildy, the function signatures won't change.

    My arguments:

    - True, everything has to be delivered to the client as one package, but that's what build scripts are for: they download the external libraries and create a bundle.
    - Within updates of the same version, 3.2.buildx to 3.2.buildy, it's impossible that a signature changes, unless it is a really crappy framework, which isn't the case with OpenCV.
    - We deprive ourselves of new updates and features of that library. If there's a bug in the version we took, then even if there's a bug fix later, we won't be able to get that fix.
    - It's simply inefficient and anti-design to depend on a certain version/build of an external library, as it makes it difficult for our project to adapt to new changes in the future.

    So I'd like to know what you guys think. Does it really make sense to include a specific version of an external library in our SVN and keep using it, ignoring all updates?

    Read the article

  • SEO: Getting site to show in location-specific searches

    - by willvv
    I'm really new to this SEO world and I've been reading a lot to try and figure it out. We have a site, moodbond.com, that allows users to browse/create events anywhere, and we fill it with content from the main cities in the US. We would like it to show up for searches like "events in san francisco" or "what to do in new york". However, since the site is not really location-specific, I'm not really sure where to begin. I've been thinking about a couple of things; maybe you can help me decide if these would be a good way to start or if I should try something different (ideas 1 and 2 are sketched after this list):

    1. Allow location-specific URLs (e.g. moodbond.com/browse/san-francisco) that just show the main page centered on San Francisco.
    2. Change the headers/title of the page so they adapt automatically to the city being browsed (and change this dynamically as the user changes the location of the map).
    3. Add internal links to different locations, e.g. a link in the footer of the page that says "Events in Seattle" and makes the site load events in that city (this would probably depend on implementing #1).

    What do you guys think? Will any of these really help, or should I look for a different approach? Any advice is welcome. Thanks
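    Purely as an illustration of ideas 1 and 2 (the site's actual stack isn't stated), a minimal sketch in Python/Flask with a hypothetical route and city registry:

    ```python
    from flask import Flask, render_template_string

    app = Flask(__name__)

    # Hypothetical city registry; in practice this would come from the database.
    CITIES = {"san-francisco": "San Francisco", "new-york": "New York"}

    PAGE = """<title>Events in {{ city }} | MoodBond</title>
    <h1>What to do in {{ city }}</h1>"""

    @app.route("/browse/<slug>")
    def browse(slug):
        # Idea 1: a crawlable, location-specific URL per city.
        # Idea 2: the <title> and header adapt to the city being browsed.
        city = CITIES.get(slug)
        if city is None:
            return "Unknown city", 404
        return render_template_string(PAGE, city=city)

    if __name__ == "__main__":
        app.run()
    ```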

    Read the article

  • XNA- Transforming children

    - by user1806687
    So, I have a Model stored in MyModel that is made from three meshes. If you loop through MyModel.Meshes, the first two are children of the third one. I was just wondering if anyone could tell me where the problem is in my code. This method is called whenever I want to programmatically change the position of the whole model:

    ```csharp
    public void ChangePosition(Vector3 newPos)
    {
        Position = newPos;
        MyModel.Root.Transform =
            Matrix.CreateScale(VectorMathHelper.VectorMath(CurrentSize, DefaultSize, '/')) *
            Matrix.CreateFromAxisAngle(MyModel.Root.Transform.Up, MathHelper.ToRadians(Rotation.Y)) *
            Matrix.CreateFromAxisAngle(MyModel.Root.Transform.Right, MathHelper.ToRadians(Rotation.X)) *
            Matrix.CreateFromAxisAngle(MyModel.Root.Transform.Forward, MathHelper.ToRadians(Rotation.Z)) *
            Matrix.CreateTranslation(Position);

        Matrix[] transforms = new Matrix[MyModel.Bones.Count];
        MyModel.CopyAbsoluteBoneTransformsTo(transforms);
        int count = transforms.Length - 1;
        foreach (ModelMesh mesh in MyModel.Meshes)
        {
            mesh.ParentBone.Transform = transforms[count];
            count--;
        }
    }
    ```

    This is the draw method:

    ```csharp
    foreach (ModelMesh mesh in MyModel.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.View = camera.view;
            effect.Projection = camera.projection;
            effect.World = mesh.ParentBone.Transform;
            effect.EnableDefaultLighting();
        }
        mesh.Draw();
    }
    ```

    The thing is, when I call ChangePosition() the first time everything works perfectly, but as I call it again and again, the first two meshes (the child meshes) start to move away from the parent mesh. Another thing I wanted to ask: if I change the scale/rotation/position of a child mesh and then do CopyAbsoluteBoneTransforms(), will the child meshes be positioned properly (at the proper distance), or would achieving that require more math/methods? Thanks in advance

    Read the article

  • Rotate a vector

    - by marc wellman
    I want my first-person camera to smoothly change its viewing direction from direction d1 to direction d2, where the latter direction is indicated by a target position t2. So far I have implemented a rotation that works fine, but the speed of the rotation slows down the closer the current direction gets to the desired one. This is what I want to avoid. Here are the two very simple methods I have written so far:

    ```csharp
    // this method initiates the direction change and sets the parameters
    public void LookAt(Vector3 target)
    {
        _desiredDirection = target - _cameraPosition;
        _desiredDirection.Normalize();
        _rotation = new Matrix();
        _rotationAxis = Vector3.Cross(Direction, _desiredDirection);
        _isLooking = true;
    }

    // this method gets executed by the Update() method if the _isLooking flag is up
    private void _lookingAt()
    {
        dist = Vector3.Distance(Direction, _desiredDirection);
        // check whether the current direction has reached the desired one
        if (dist >= 0.00001f)
        {
            _rotationAxis = Vector3.Cross(Direction, _desiredDirection);
            _rotation = Matrix.CreateFromAxisAngle(_rotationAxis, MathHelper.ToRadians(1));
            Direction = Vector3.TransformNormal(Direction, _rotation);
        }
        else
        {
            _onDirectionReached();
            _isLooking = false;
        }
    }
    ```

    Again, the rotation works fine and the camera reaches its desired direction, but the speed is not equal over the course of the movement; it slows down. How do I achieve a rotation with constant speed?
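    The question is XNA/C#, but the constant-speed fix is plain vector math: rotate by a fixed angle per update around the normalized rotation axis, clamping the final step to the remaining angle. A minimal sketch in Python with NumPy, names hypothetical:

    ```python
    import numpy as np

    def rotate_towards(direction, desired, step_deg=1.0):
        """Rotate unit vector `direction` towards unit vector `desired` by at
        most `step_deg` degrees, using Rodrigues' rotation formula.
        (The exactly anti-parallel case would need a chosen axis; omitted.)"""
        cos_angle = np.clip(np.dot(direction, desired), -1.0, 1.0)
        remaining = np.degrees(np.arccos(cos_angle))
        if remaining < 1e-6:
            return desired                        # already there
        axis = np.cross(direction, desired)
        axis /= np.linalg.norm(axis)              # normalizing keeps speed constant
        theta = np.radians(min(step_deg, remaining))  # clamp the final step
        return (direction * np.cos(theta)
                + np.cross(axis, direction) * np.sin(theta)
                + axis * np.dot(axis, direction) * (1.0 - np.cos(theta)))

    d = np.array([1.0, 0.0, 0.0])
    target = np.array([0.0, 1.0, 0.0])
    for _ in range(90):                           # 90 updates of 1 degree each
        d = rotate_towards(d, target)
    print(np.round(d, 3))                         # -> [0. 1. 0.]
    ```

    The slowdown in the original code is consistent with the un-normalized axis: Vector3.Cross shrinks in magnitude as the two directions align, so the effective rotation per frame shrinks with it.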

    Read the article

  • Tips on how to notify a user of new features in your game (Android)

    - by brent777
    I have noticed a problem when releasing new features for a game that I wrote for Android and published on the Google Play Store. Because my game is "stage-based", and not a game like Hay Day, for example, where users will just go into the game every day since it can't really be finished, my users are not aware of new features that I release for the game. For example, if I publish a new version of my game and it contains a couple of new stages, most of their devices will just auto-update the game; users don't even notice this, let alone think to check out what's new. This is why an approach like popping open a dialog that showcases the new feature(s) when they open the game for the first time after the update is not really sufficient. I am looking for some tips on an approach that will draw my users back into the game; then they could read more detail about the new features in such a dialog. I was thinking of something like a notification that tells them to check out the new features after an update is done, but I am not sure if this is a good idea. Any suggestions to help me solve this problem would be awesome.

    Read the article

  • How to choose between Tell don't Ask and Command Query Separation?

    - by Dakotah North
    The principle Tell, Don't Ask says: "you should endeavor to tell objects what you want them to do; do not ask them questions about their state, make a decision, and then tell them what to do. The problem is that, as the caller, you should not be making decisions based on the state of the called object that result in you then changing the state of the object. The logic you are implementing is probably the called object's responsibility, not yours. For you to make decisions outside the object violates its encapsulation." A simple example of "Tell, don't Ask" is:

    ```java
    Widget w = ...;
    if (w.getParent() != null) {
        Panel parent = w.getParent();
        parent.remove(w);
    }
    ```

    and the tell version is:

    ```java
    Widget w = ...;
    w.removeFromParent();
    ```

    But what if I need to know the result of the removeFromParent method? My first reaction was just to change removeFromParent to return a boolean denoting whether the parent was removed or not. But then I came across the Command Query Separation pattern, which says NOT to do this: "every method should either be a command that performs an action, or a query that returns data to the caller, but not both. In other words, asking a question should not change the answer. More formally, methods should return a value only if they are referentially transparent and hence possess no side effects." Are these two really at odds with each other, and how do I choose between the two? Do I go with the Pragmatic Programmer or Bertrand Meyer on this?
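    One common reconciliation, sketched below in Python with hypothetical names, keeps the command and the query separate and lets the caller ask afterwards only when it genuinely needs to know:

    ```python
    class Panel:
        def __init__(self):
            self.children = []

        def add(self, widget):
            self.children.append(widget)
            widget.parent = self

    class Widget:
        def __init__(self):
            self.parent = None

        # Query: reports state, changes nothing (CQS-compliant).
        def has_parent(self):
            return self.parent is not None

        # Command: performs the action, returns nothing (tell, don't ask).
        def remove_from_parent(self):
            if self.parent is not None:
                self.parent.children.remove(self)
                self.parent = None

    w = Widget()
    panel = Panel()
    panel.add(w)

    w.remove_from_parent()   # tell: the widget handles the parent check itself
    print(w.has_parent())    # separate query, asked only if the caller cares
    ```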

    Read the article

  • Computer Says No: Mobile Apps Connectivity Messages

    - by ultan o'broin
    Sharing some insight into connectivity messages for mobile applications. Based on some recent ethnography done by myself, and prompted by a real business case, I would recommend a message that:

    - In plain language, briefly and directly tells the user what is wrong and why. Something like: "Cannot connect because of a network problem."
    - Affords the user a means to retry connecting (or attempts it automatically). The mobile context of use means users anticipate interruptibility and disruption of task, so they will try again as an effective course of action.
    - Tells the user when the connection is re-established, and off they go.
    - Saves any work already done, implicitly. (Bonus points on the ADF critical task setting scale.)

    The following images, showing my experience reading ADF-EMG Google Groups notifications on my (Android ICS) Samsung Galaxy S2 during a loss of WiFi, give you a good idea of a suitable kind of messaging user experience for mobile apps in this kind of scenario.

    [Image: Inline connection-lost message with Retry button]

    [Image: Connection re-established toast message]

    The UX possible is dependent on device and platform features, sure, so remember to integrate with the device capability (see point 10 of this great article on mobile design by Brent White and Lynn Hnilo-Rampoldi), but taking these considerations into account is far superior to a context-free, dumbed-down common error message repurposed from the desktop mentality about the connection to the server being lost, so just "Click OK" or "Contact your sysadmin".

    Read the article

  • Quantify value for management

    - by nivlam
    We have two different legacy systems (Windows services in this case) that do exactly the same thing. Both of these systems have small differences for the different applications they serve. Both systems' core functionality lies within a shared library. Most of the time, the updates occur in the shared library and we simply deploy the updated library to both systems. The systems themselves rarely change. Since both of these systems do essentially the same thing, our development team would like to consolidate them into a single service. What can I do to convince management to allocate time for such a task? Some of the points I've noted are:

    - Easier maintenance
    - Decreased testing/QA time

    Unfortunately, this isn't enough. They would like us to provide them with hard numbers on the amount of hours this will save in the future and how it will speed up future development. Since most of the work is done in the shared library and the systems themselves never change, it's hard for us to quantify how many hours this will save. What kind of arguments can I make to justify the extra work to consolidate these systems?

    Read the article
