Search Results

Search found 22308 results on 893 pages for 'floating point'.

Page 612/893

  • Achieving decoupling in Model classes

    - by Guven
    I am trying to test-drive (or at least write unit tests for) my Model classes, but I noticed that my classes end up being too coupled. Since I can't break this coupling, writing unit tests is becoming harder and harder. To be more specific: Model Classes: These are the classes that hold the data in my application. They resemble pretty much the POJO (plain old Java objects), but they also have some methods. The application is not too big so I have around 15 model classes. Coupling: Just to give an example, think of a simple case of Order Header - Order Item. The header knows the item and the item knows the header (needs some information from the header for performing certain operations). Then, let's say there is the relationship between Order Item - Item Report. The item report needs the item as well. At this point, imagine writing tests for Item Report; you need an Order Header to carry out the tests. This is a simple case with 3 classes; things get more complicated with more classes. I can come up with decoupled classes when I design algorithms, persistence layers, UI interactions, etc... but with model classes, I can't think of a way to separate them. They currently sit as one big chunk of classes that depend on each other. Here are some workarounds that I can think of: Data Generators: I have a package that generates sample data for my model classes. For example, the OrderHeaderGenerator class creates OrderHeaders with some basic data in them. I use the OrderHeaderGenerator from my ItemReport unit tests so that I get an instance of the OrderHeader class. The problem is these generators get complicated pretty fast and then I also need to test these generators; defeating the purpose a little bit. Interfaces instead of dependencies: I can come up with interfaces to get rid of the hard dependencies. For example, the OrderItem class would depend on the IOrderHeader interface. So, in my unit tests, I can easily mock the behaviour of an OrderHeader with a FakeOrderHeader class that implements the IOrderHeader interface. The problem with this approach is the complexity that the Model classes would end up having. Would you have other ideas on how to break this coupling in the model classes? Or, how to make it easier to unit-test the model classes?
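    To illustrate the second workaround, here is a minimal sketch (written in C# here, though the same idea applies to the Java/POJO classes described above; all member names besides IOrderHeader and FakeOrderHeader are illustrative assumptions) of extracting only the members the item actually needs into an interface, so tests can use a trivial fake instead of a fully wired-up header:

        // Only the members OrderItem actually needs are pulled into the interface (illustrative).
        public interface IOrderHeader
        {
            string Currency { get; }
            decimal DiscountRate { get; }
        }

        public class OrderItem
        {
            private readonly IOrderHeader header;   // depends on the abstraction, not the concrete header

            public OrderItem(IOrderHeader header) { this.header = header; }

            public decimal NetPrice(decimal grossPrice)
            {
                // uses only what the interface exposes
                return grossPrice * (1 - header.DiscountRate);
            }
        }

        // In the unit test, a trivial fake replaces the real OrderHeader.
        public class FakeOrderHeader : IOrderHeader
        {
            public string Currency { get { return "USD"; } }
            public decimal DiscountRate { get { return 0.1m; } }
        }

    The fake keeps the item and report tests free of any real OrderHeader setup; the cost, as noted above, is one extra interface per model class that needs to be substituted.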

    Read the article

  • 4.8M wasn't enough so we went for 5.055M tpmc with Unbreakable Enterprise Kernel r2 :-)

    - by wcoekaer
    We released a new set of benchmarks today. One is an updated TPC-C from a few months ago where we had just over 4.8M tpmC at $0.98, and we just updated it to go to 5.05M and $0.89. The other one is related to Java middleware performance. You can find the press release here. Now, I don't want to talk about the actual relevance of the benchmark numbers, as I am not in the benchmark team. I want to talk about why these numbers and these efforts, unrelated to what they mean to your workload, matter to customers. The actual benchmark effort is a very big, long, expensive undertaking where many groups work together as a big virtual team. Having the virtual team be within a single company of course helps tremendously... We already start with a very big server setup with tons of storage, many disks, lots of RAM, lots of CPUs, cores, threads, and large database setups. Getting the whole setup going to start tuning, by itself, is no easy task, but then the real fun starts with tuning the system for optimal performance -and- stability. A benchmark is not just revving an engine at high rpm, it's actually hitting the circuit. The tests require long runs and require surviving availability tests, such as surviving crashes -and- recovery under load. In the TPC-C example, the x4800 system had 4TB RAM, 160 threads (8 sockets, hyperthreaded, 10 cores/socket), tons of storage attached, tons of LUNs visible to the OS, flash storage, non-flash storage... many things at high scale that all have to be perfectly synchronized. During this process, we find bugs, we fix bugs, we find performance issues, we fix performance issues, we find interesting potential features to investigate for the future, we start new development projects for future releases, and all this goes back into the products. As more and more customers, for Oracle Linux, are running larger and larger, faster and faster, more mission-critical, more highly available databases..., these things are just absolutely critical. Unrelated to what anyone's specific opinion is about TPC-C or TPC-H or SPECjEnterprise etc., there is a ton of effort that the customer benefits from. All this work makes Oracle Linux and/or Oracle Solaris better platforms. Whether it's faster, more stable, more scalable, more resilient. It helps. Another point that I always like to re-iterate around UEK and UEK2: we have our kernel source git repository online. Complete changelog of the mainline kernel, and our changes, easy to pull, easy to dissect, easy to know what went in when, why and where. No need to log into a website and manually click through pages to hopefully discover changes or patches. No need to untar two tarballs and run a diff.

    Read the article

  • Attaching two objects and changing their world matrices accordingly

    - by A-Type
    I'm having a hard time wrapping my head around the transformations required to bind two objects together in either a two-way or one-way relationship. I will need to implement both types. For the first case, I want to be able to 'couple' two ships together in space. The ships have different mass, of course. Forces applied to either ship will use combined mass and moment of inertia to calculate and move both ships. The trick is, being sure that the point at which they are coupled remains the same, and they don't move at all relative to each other. The second case is similar: I want a ship to be able to enter the atmosphere of a planet and move relative to the planet. The planet will be orbiting the sun, which is fixed at 0,0,0. Essentially, when the ship is sitting still outside of the atmosphere, the planet will move past it on its course-- but when the ship is sitting still inside the atmosphere, it moves and rotates with the planet, so that it is always relative to the horizon. Essentially, the vertices which make up the ship are now transformed just like the ones that make up the planet, except that the ship can move itself around relative to the planet. I get the feeling I can implement both of these with the same code. Essentially, I am thinking of giving each object (which I call Fixtures) a list of "slave" Fixtures onto which that Fixture's world matrix is imposed. So, this would be the planet imposing its world on any contained ships. In the case of coupling, I would simply make each ship a slave of the other, somehow. Obviously I can't just multiply the ship's world matrix by the planet's, or each ship by the others. What I'd like some help with is what calculations to make in order to get a nice, seamless relative world to the other object. I was thinking maybe I could just multiply the world of the slave by the inverse of the master, but then when you couple two ships you would lose all that world data. So, perhaps I need an intermediate "world" which is the absolute world, but use a secondary "final world" to actually transform the objects?
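    The "intermediate world plus final world" idea at the end is workable; here is a minimal sketch of it (assuming an XNA-style row-vector Matrix API; the Fixture members and method names are illustrative, not the asker's actual code). The slave's offset relative to the master is captured once at attach time, then its drawn world is rebuilt from the master's world every frame:

        // A rough sketch of storing a relative transform at attach time.
        public class Fixture
        {
            public Matrix World = Matrix.Identity;        // absolute world, still simulated normally
            public Matrix FinalWorld = Matrix.Identity;   // what actually gets used for drawing

            private Fixture master;
            private Matrix offsetFromMaster;              // slave world expressed in the master's space

            public void AttachTo(Fixture newMaster)
            {
                master = newMaster;
                // offset = slaveWorld * inverse(masterWorld), captured at the moment of attachment
                offsetFromMaster = World * Matrix.Invert(newMaster.World);
            }

            public void Detach()
            {
                World = FinalWorld;   // keep whatever pose we ended up in when releasing
                master = null;
            }

            public void UpdateFinalWorld()
            {
                // Free objects use their own world; attached ones ride along with the master.
                FinalWorld = (master == null) ? World : offsetFromMaster * master.World;
            }
        }

    For the two-ship coupling case the same trick could apply, but with a single combined rigid body owning the authoritative world (combined mass and inertia) and both ships attached to it as slaves, rather than each ship being the other's master.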

    Read the article

  • How can I achieve a 3D-like effect with spritebatch's rotation and scale parameters

    - by Alic44
    I'm working on a 2d game with a top-down perspective similar to Secret of Mana and the 2D Final Fantasy games, with one big difference being that it's an action rpg using a 3-dimensional physics engine. I'm trying to draw an aimer graphic (basically an arrow) at my characters' feet when they're aiming a ranged weapon. At first I just converted the character's aim vector to radians and passed that into spritebatch, but there was a problem. The position of every object in my world is scaled for perspective when it's drawn to the screen. So if the physics engine coordinates are (1, 0, 1), the screen coords are actually (1, .707) -- the Y and Z axis are scaled by a perspective factor of .707 and then added together to get the screen coordinates. This meant that the direction the aimer graphic pointed (thanks to its rotation value passed into spritebatch) didn't match up with the direction the projectile actually traveled over time. Things looked fine when the characters fired left, right, up, or down, but if you fired on a diagonal the perspective of the physics engine didn't match with the simplistic way I was converting the character's aim direction to a screen rotation. Ok, fast forward to now: I've got the aimer's rotation matched up with the path the projectile will actually take, which I'm doing by decomposing a transform matrix which I build from two rotation matrices (one to represent the aimer's rotation, and one to represent the camera's 45 degree rotation on the x axis). My question is, is there a way to get not just rotation from a series of matrix transformations, but to also get a Vector2 scale which would give the aimer the appearance of being a 3d object, being warped by perspective? Orthographic perspective is what I'm going for, I think. So, the aimer arrow would get longer when facing sideways, and shorter when facing north and south because of the perspective. At the same time, it would get wider when facing north and south, and less wide when facing right or left. I'd like to avoid actually drawing the aimer texture in 3d because I'm still using spritebatch's layerdepth parameter at this point in my project, and I don't want to have to figure out how to draw a 3d object within the depth sorting system I already have. I can provide code and more details if this is too vague as a question... This is my first post on stack exchange. Thanks a lot for reading! Note: (I think) I realize it can't be a technically correct 3D perspective, because the spritebatch's vector2 scaling argument doesn't allow for an object to be skewed the way it actually should be. What I'm really interested in is, is there a good way to fake the effect, or should I just drop it and not scale at all? Edit to clarify without the help of a picture (apparently I can't post them yet): I want the aimer arrow to look like it has been painted on the ground at the character's feet, so it should appear to be drawn on the ground plane (in my case the XZ plane) which should be tilted at a 45 degree angle (around the X axis) from the viewing perspective. Alex
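    One way to fake the effect, given the projection described above (screen Y = (worldY + worldZ) * 0.707), is to push the aim direction and its ground-plane perpendicular through the same squash and read a rotation plus a Vector2 scale out of the results. This is only a hedged sketch (the constant, the method name, and the assumption that the aimer texture points along its local +X axis are mine, not from the project), and it will not produce a true skew, just the longer/shorter and wider/narrower behavior described:

        // Sketch: derive SpriteBatch rotation and Vector2 scale from the ground-plane aim direction.
        // aimDirXZ is the aim in physics-engine X/Z coordinates and is assumed non-zero.
        const float PerspectiveFactor = 0.707f;

        static void GetAimerRotationAndScale(Vector2 aimDirXZ, out float rotation, out Vector2 scale)
        {
            aimDirXZ.Normalize();
            Vector2 side = new Vector2(-aimDirXZ.Y, aimDirXZ.X);           // perpendicular, still in the ground plane

            // Project both ground-plane directions the same way world positions are projected:
            // X stays, the depth component gets squashed by the perspective factor.
            Vector2 aimOnScreen  = new Vector2(aimDirXZ.X, aimDirXZ.Y * PerspectiveFactor);
            Vector2 sideOnScreen = new Vector2(side.X,     side.Y     * PerspectiveFactor);

            rotation = (float)Math.Atan2(aimOnScreen.Y, aimOnScreen.X);    // matches the projectile's screen path
            scale = new Vector2(aimOnScreen.Length(),                      // shorter when facing north/south
                                sideOnScreen.Length());                    // wider when facing north/south
        }

    The rotation and scale can then go straight into the SpriteBatch.Draw overload that takes a Vector2 scale, so the existing layerDepth sorting stays untouched.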

    Read the article

  • Criteria for selecting timeout value?

    - by stijn
    Situation: a piece of software reads frames of data from a file in a separate thread and puts them on a queue, which is emptied by another thread. That second thread periodically checks the queue and fails rather gracefully, by showing an error message stating the read timed out, if no data is available within a certain amount of time. Initially this timeout was set to 200 ms. There was no real reasoning behind that constant, but it worked fine. We measured on a couple of machines, and for large data frames, larger than what would be used by customers, a read took about 20 ms with no other load on the machine. However, one customer now gets timeout errors now and then (on the second try all is fine; probably the file is in cache or the virus scanner leaves it alone). The programmers are like 'well, yeah, but that customer's machine is full of cruft, virus scanners, tons of unneeded background processes, etc.'. Of course the customer is like 'hey, this should just work, shouldn't it?'. While the programmers have a point, since the software is heavy enough to justify the need for a dedicated machine, that does not make the customer happy. Increasing the timeout to 2 seconds, for example, solves the problem. But I'd like to make a proper decision now instead of just randomly picking some magic constant that is probably OK in 99% of cases. What criteria should be used for that? We could just pick a large number, but that feels wrong (and then we end up with a program that has the horrible behaviour of hanging when trying to read from a disconnected drive, for instance, whereas we'd rather make it show an error right away). Or we could make the timeout value a user setting, but then we need to document it clearly and even then not all customers are tech-savvy enough to really understand what it does. Or we could try and wait until another customer reports timeouts and increase the value again. And again. Until we find something OK for 99.99% of the cases... Any good practice for this type of situation?
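    One common practice is to derive the timeout from measured read times plus a generous safety margin, and clamp it so the "hangs forever on a disconnected drive" case stays bounded. A minimal sketch of that idea in C# (all numbers, names, and the percentile choice are illustrative assumptions, not a recommendation of specific values):

        // Sketch: track recent successful read durations and set the timeout to a high
        // percentile times a safety factor, clamped to sane bounds.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class TimeoutEstimator
        {
            private readonly List<double> samplesMs = new List<double>();

            public void RecordReadDuration(TimeSpan duration)
            {
                samplesMs.Add(duration.TotalMilliseconds);
            }

            public TimeSpan CurrentTimeout(double safetyFactor = 10.0, double minMs = 250, double maxMs = 5000)
            {
                if (samplesMs.Count == 0)
                    return TimeSpan.FromMilliseconds(maxMs);            // no data yet: be generous

                var sorted = samplesMs.OrderBy(x => x).ToList();
                double p99 = sorted[(int)(0.99 * (sorted.Count - 1))];  // 99th percentile of observed reads

                double timeoutMs = Math.Min(maxMs, Math.Max(minMs, p99 * safetyFactor));
                return TimeSpan.FromMilliseconds(timeoutMs);
            }
        }

    The upper clamp keeps the worst case bounded, and exposing the bounds and safety factor as an advanced setting (rather than the primary knob) keeps them out of the way of less technical customers.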

    Read the article

  • Quaternion based rotation and pivot position

    - by Michael IV
    I can't figure out how to perform matrix rotation using Quaternion while taking into account pivot position in OpenGL. What I am currently getting is rotation of the object around some point in the space and not a local pivot, which is what I want. Here is the code [using Java].

    Quaternion rotation method:

        public void rotateTo3(float xr, float yr, float zr) {
            _rotation.x = xr;
            _rotation.y = yr;
            _rotation.z = zr;
            Quaternion xrotQ = Glm.angleAxis((xr), Vec3.X_AXIS);
            Quaternion yrotQ = Glm.angleAxis((yr), Vec3.Y_AXIS);
            Quaternion zrotQ = Glm.angleAxis((zr), Vec3.Z_AXIS);
            xrotQ = Glm.normalize(xrotQ);
            yrotQ = Glm.normalize(yrotQ);
            zrotQ = Glm.normalize(zrotQ);
            Quaternion acumQuat;
            acumQuat = Quaternion.mul(xrotQ, yrotQ);
            acumQuat = Quaternion.mul(acumQuat, zrotQ);
            Mat4 rotMat = Glm.matCast(acumQuat);
            _model = new Mat4(1);
            scaleTo(_scaleX, _scaleY, _scaleZ);
            _model = Glm.translate(_model, new Vec3(_pivot.x, _pivot.y, 0));
            _model = rotMat.mul(_model); //_model.mul(rotMat); //rotMat.mul(_model);
            _model = Glm.translate(_model, new Vec3(-_pivot.x, -_pivot.y, 0));
            translateTo(_x, _y, _z);
            notifyTranformChange();
        }

    Model matrix scale method:

        public void scaleTo(float x, float y, float z) {
            _model.set(0, x);
            _model.set(5, y);
            _model.set(10, z);
            _scaleX = x;
            _scaleY = y;
            _scaleZ = z;
            notifyTranformChange();
        }

    Translate method:

        public void translateTo(float x, float y, float z) {
            _x = x - _pivot.x;
            _y = y - _pivot.y;
            _z = z;
            _position.x = _x;
            _position.y = _y;
            _position.z = _z;
            _model.set(12, _x);
            _model.set(13, _y);
            _model.set(14, _z);
            notifyTranformChange();
        }

    But this method, in which I don't use a Quaternion, works fine:

        public void rotate(Vec3 axis, float angleDegr) {
            _rotation.add(axis.scale(angleDegr));
            // change to GLM:
            Mat4 backTr = new Mat4(1.0f);
            backTr = Glm.translate(backTr, new Vec3(_pivot.x, _pivot.y, 0));
            backTr = Glm.rotate(backTr, angleDegr, axis);
            backTr = Glm.translate(backTr, new Vec3(-_pivot.x, -_pivot.y, 0));
            _model = _model.mul(backTr); ///backTr.mul(_model);
            notifyTranformChange();
        }

    Read the article

  • Per-pixel collision detection - why does XNA transform matrix return NaN when adding scaling?

    - by JasperS
    I looked at the TransformCollision sample on MSDN and added the Matrix.CreateTranslation part to a property in my collision detection code, but I wanted to add scaling. The code works fine when I leave scaling commented out, but when I add it and then do a Matrix.Invert() on the created translation matrix the result is NaN ({NaN,NaN,NaN},{NaN,NaN,NaN},...). Can anyone tell me why this is happening, please? Here's the code from the sample:

        // Build the block's transform
        Matrix blockTransform =
            Matrix.CreateTranslation(new Vector3(-blockOrigin, 0.0f)) *
            // Matrix.CreateScale(block.Scale) * would go here
            Matrix.CreateRotationZ(blocks[i].Rotation) *
            Matrix.CreateTranslation(new Vector3(blocks[i].Position, 0.0f));

        public static bool IntersectPixels(
            Matrix transformA, int widthA, int heightA, Color[] dataA,
            Matrix transformB, int widthB, int heightB, Color[] dataB)
        {
            // Calculate a matrix which transforms from A's local space into
            // world space and then into B's local space
            Matrix transformAToB = transformA * Matrix.Invert(transformB);

            // When a point moves in A's local space, it moves in B's local space with a
            // fixed direction and distance proportional to the movement in A.
            // This algorithm steps through A one pixel at a time along A's X and Y axes
            // Calculate the analogous steps in B:
            Vector2 stepX = Vector2.TransformNormal(Vector2.UnitX, transformAToB);
            Vector2 stepY = Vector2.TransformNormal(Vector2.UnitY, transformAToB);

            // Calculate the top left corner of A in B's local space
            // This variable will be reused to keep track of the start of each row
            Vector2 yPosInB = Vector2.Transform(Vector2.Zero, transformAToB);

            // For each row of pixels in A
            for (int yA = 0; yA < heightA; yA++)
            {
                // Start at the beginning of the row
                Vector2 posInB = yPosInB;

                // For each pixel in this row
                for (int xA = 0; xA < widthA; xA++)
                {
                    // Round to the nearest pixel
                    int xB = (int)Math.Round(posInB.X);
                    int yB = (int)Math.Round(posInB.Y);

                    // If the pixel lies within the bounds of B
                    if (0 <= xB && xB < widthB && 0 <= yB && yB < heightB)
                    {
                        // Get the colors of the overlapping pixels
                        Color colorA = dataA[xA + yA * widthA];
                        Color colorB = dataB[xB + yB * widthB];

                        // If both pixels are not completely transparent,
                        if (colorA.A != 0 && colorB.A != 0)
                        {
                            // then an intersection has been found
                            return true;
                        }
                    }

                    // Move to the next pixel in the row
                    posInB += stepX;
                }

                // Move to the next row
                yPosInB += stepY;
            }

            // No intersection found
            return false;
        }
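    For reference, here is the same transform with the scale step included (a hedged sketch; block.Scale is assumed to be a single float, as in the commented-out line above). One thing worth checking: if the scale happens to be zero when the matrix is built, the combined matrix has a zero determinant, and inverting a non-invertible matrix can come back as NaN values rather than throwing, which would match the symptom described.

        // Sketch: transform with scaling included (XNA-style), plus a cheap sanity check
        // before the matrix is handed to IntersectPixels / Matrix.Invert.
        Matrix blockTransform =
            Matrix.CreateTranslation(new Vector3(-blockOrigin, 0.0f)) *
            Matrix.CreateScale(block.Scale) *          // assumed float; use CreateScale(new Vector3(sx, sy, 1f)) for per-axis scale
            Matrix.CreateRotationZ(blocks[i].Rotation) *
            Matrix.CreateTranslation(new Vector3(blocks[i].Position, 0.0f));

        System.Diagnostics.Debug.Assert(block.Scale != 0f, "zero scale makes the transform non-invertible");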

    Read the article

  • following a moving sprite

    - by iQue
    I'm trying to get my enemies to follow the main character of the game (2D), but for some reason the game starts lagging like crazy when I do it the way I want to do it, and the following part doesn't work 100% either: only 1 of 24 enemies comes to my sprite, the other 23 move towards it but stop at a certain point. Might be a poor explanation, but I don't know how else to put it. Code for moving my enemies:

        private int enemyX() {
            int x = 0;
            for (int i = 0; i < enemies.size(); i++) {
                if (controls.pointerPosition.x > enemies.get(i).getX()) { // pointerPosition is the position of my main sprite
                    x = 5;
                } else {
                    x = -5;
                }
                Log.d(TAG, "happyX HERE: " + controls.pointerPosition.x);
                Log.d(TAG, "enemyX HERE: " + enemies.get(i).getX());
            }
            return x;
        }

        private int enemyY() {
            int y = 0;
            for (int i = 0; i < enemies.size(); i++) {
                if (controls.pointerPosition.y > enemies.get(i).getY()) {
                    y = 5;
                } else {
                    y = -5;
                }
            }
            return y;
        }

    I send it to the update method in my Enemy class:

        private void drawEnemy(Canvas canvas) {
            addEnemies(); // a method where I add enemies to my ArrayList, no parameters except bitmap
            for (int i = 0; i < enemies.size(); i++) {
                enemies.get(i).update(enemyX(), enemyY());
            }
            for (int i = 0; i < enemies.size(); i++) {
                enemies.get(i).draw(canvas);
            }
        }

    and finally, the update method itself, located in my Enemy class:

        public void update(int velX, int velY) {
            x += velX; // sets x before I draw
            y += velY; // sets y before I draw
            currentFrame = ++currentFrame % BMP_COLUMNS;
        }

    So can anyone figure out why it starts lagging so much and how I can fix it? Thanks for your time!

    Read the article

  • OOF checklist

    - by Daniel Moth
    When going on vacation or otherwise being out of office (known as OOF in Microsoft), it is polite and professional that our absence creates the minimum disruption possible to the rest of the business, and especially our colleagues. Below is my OOF checklist - I try to do these as soon as I know I'll be OOF, rather than leave it for the night before.
    1. Let the relevant folks on the team know the planned dates of absence and check if anybody was expecting something from you during that timeframe. Reset expectations with them, and as applicable try to find another owner for individual activities that cannot wait.
    2. Go through your calendar for the OOF period and decline every meeting occurrence so the owner of the meeting knows that you won't be attending (similar to my post about responding to invites). If it is your meeting, cancel it so that people don't turn up without the meeting organizer being there. Do this even for meetings where the folks should know due to step #1. Over-communicating is a good thing here and keeps calendars all around up to date.
    3. Enter your OOF dates in whatever tool your company uses. Typically that is the notification to your manager.
    4. In your Outlook calendar, create a local Appointment (don't invite anyone) for the date range (All day event), setting the "Show As" dropdown to "Out of Office". This way, people won't try to schedule meetings with you on those days.
    5. If you use Lync, set the status to "Off Work" for that period.
    6. If you won't be responding to email (which when on your vacation you definitely shouldn't), then in Outlook set up "Automatic Replies (Out of Office)" for that period. This way people won't think you are rude when not replying to their emails. In your OOF message, point to an alias (ideally of many people) as a fallback for urgent queries.
    7. If you want to proactively notify individuals of your OOFage, then schedule and send a multi-day meeting request for the entire period. Remember to set the "Show As" to "Free" (so their calendar doesn't show busy/oof to others), set the "Reminder" to "None" (so they don't get a reminder about it), set "Low Importance", and uncheck both "Response Options" so that if they don't want this on their calendar, it is just one click for them to get rid of it. Aside: I have another post with advice on sending invites.
    8. If you care about people who would not observe the above but could drop by your office, stick a physical OOF note at your office door or chair/monitor or desk.
    Have I missed any? Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Is there an API for determining congressional districts?

    - by ardavis
    I'm looking to determine the congressional district based on an address my user is providing. This will avoid having the user look it up themselves. Does an API of this sort exist?
    Note: Through my attempts to find one, I've only come across these:
    http://www.govtrack.us/developers/api (not sure how to submit an address or zip code, however) The following resources are available in the API ...Bills and resolutions in the U.S. Congress since 1973 (the 93rd Congress). ...A (bill, person) pair indicating cosponsorship, with join and withdrawn dates. ...Members of Congress and U.S. Presidents since the founding of the nation. ...Terms held in office by Members of Congress and U.S. Presidents. Each term corresponds with an election, meaning each term in the House covers two years (one 'Congress'), as President four years, and in the Senate six years (three 'Congresses'). ...Roll call votes in the U.S. Congress since 1789. How people voted is accessed through the Vote_voter API. ...How people voted on roll call votes in the U.S. Congress since 1789. See the Vote API. Filter on the vote field to get the results of a particular vote...
    http://www.opencongress.org/api (seems to be a way to find congress information, but not districts) This API provides programmers with structured access to all the data on OpenCongress, everything from official bill info to news and blog coverage to user-generated votes on bills and much more... This API defaults to returning XML. All queries can also return JSON...
    https://groups.google.com/forum/?fromgroups=#!topic/opendems-discuss/CeKyi_aANaE (similar question, no resolution) I've been looking over Open Dems, and seeing what's exposed at this point and what isn't. I work with Democrats Abroad, and am interested in using stuff from the lab for their sites. I quickly looked over the Precinct API, which does both more and less than what I'd need. An ideal resource would be any way of translating addresses into CD at the very least (getting state district data would be good as well), since that would make it easier for DA's membership to make a difference in races like last month's NY26 race...
    Update: I'm looking at the source for the govtrack.us website and the 'doGeoCode' function may be useful. view-source:http://www.govtrack.us/congress/members If no one has any suggestions, I will try to go off of what they are doing.

    Read the article

  • Creating Corporate Windows Phone Applications

    - by Tim Murphy
    Most developers write Windows Phone applications for their own gratification and their own wallets.  While most of the time I would put myself in the same camp, I am also a consultant.  This means that I have corporate clients who want corporate solutions.  I recently got a request for a system rebuild that includes a Windows Phone component.  This brought up the question of what the important aspects to consider are when building for this situation. Let's break it down into the points that are important to a company using a mobile application.  The company wants to make sure that its proprietary software is safe from use by unauthorized users.  It also wants to make sure that the data is secure on the device. The first point is a challenge.  There is no such thing as true private distribution in the Windows Phone ecosystem at this time.  What is available is the ability to specify your application for targeted distribution.  Even with targeted distribution you can't ensure that only individuals within your organization will be able to load your application.  Because of this I am taking two additional steps.  The first is to register the phone's DeviceUniqueId within your system.  Add a system sign-in and that should cover access to your application. The second half of the problem is securing the data on the phone.  This is where the ProtectedData API within the System.Security.Cryptography namespace comes in.  It allows you to encrypt your data before pushing it to isolated storage on the device (a small sketch of this follows below). With the announcement of Windows Phone 8 coming this fall, many of these points will have different solutions.  Private signing and distribution of applications will be available.  We will also have native access to BitLocker.  When you combine these capabilities, enterprise application development for Windows Phone will be much simpler.  Until then, work with the above suggestions to develop your enterprise solutions. del.icio.us Tags: Windows Phone 7,Windows Phone,Corporate Deployment,Software Design,Mango,Targeted Applications,ProtectedData API,Windows Phone 8
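    As a rough illustration of the second step, here is a minimal sketch (the entropy value, class name, and what gets stored are illustrative assumptions; error handling omitted) of using the ProtectedData API mentioned above to encrypt a value before writing it to isolated storage:

        using System.Security.Cryptography;
        using System.Text;

        // Minimal sketch: protect a secret with ProtectedData before persisting it, unprotect on load.
        public static class SecretStore
        {
            private static readonly byte[] Entropy = Encoding.UTF8.GetBytes("app-specific-entropy");

            public static byte[] Protect(string secret)
            {
                // encrypt the UTF-8 bytes of the secret with the optional entropy value
                return ProtectedData.Protect(Encoding.UTF8.GetBytes(secret), Entropy);
            }

            public static string Unprotect(byte[] protectedBytes)
            {
                byte[] plain = ProtectedData.Unprotect(protectedBytes, Entropy);
                return Encoding.UTF8.GetString(plain, 0, plain.Length);
            }
        }

    The protected bytes can then be written to isolated storage as usual; combined with the DeviceUniqueId registration and sign-in described above, that covers both the access and the data-at-rest concerns until Windows Phone 8's BitLocker support arrives.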

    Read the article

  • Sql Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient.  I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have as well as to see if I've missed anything.  So to that end, I give you my table:

        CREATE TABLE [dbo].[lq_ActivityLog](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [PlacementID] [int] NOT NULL,
            [CreativeID] [int] NOT NULL,
            [PublisherID] [int] NOT NULL,
            [CountryCode] [nvarchar](10) NOT NULL,
            [RequestedZoneID] [int] NOT NULL,
            [AboveFold] [int] NOT NULL,
            [Period] [datetime] NOT NULL,
            [Clicks] [int] NOT NULL,
            [Impressions] [int] NOT NULL,
            CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED (
                [Period] ASC,
                [PlacementID] ASC,
                [CreativeID] ASC,
                [PublisherID] ASC,
                [RequestedZoneID] ASC,
                [AboveFold] ASC,
                [CountryCode] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]

    And now some assumptions and additional information:
    The table has 200,000,000 rows currently.
    PlacementID ranges from 1 to 5000 and should support at least 50,000.
    CreativeID ranges from 1 to 5000 and should support at least 50,000.
    PublisherID ranges from 1 to 500 and should support at least 50,000.
    CountryCode is a 2-character ISO standard (e.g. US) and there is a country table with an integer ID already.  There are < 300 rows.
    RequestedZoneID ranges from 1 to 100 and should support at least 50,000.
    AboveFold has values of -1, 0, or 1 only.
    Period is a date (no time).
    Clicks range from 0 to 5000.
    Impressions range from 0 to 5,000,000.
    The table is currently write-mostly.  Its primary purpose is to log advertising activity as quickly as possible.  Nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables. Here's the current information on the database table's size:
    Design Goals
    This table has been in use for about 5 years and has performed very well during that time.  The only complaints we have are that it is quite large, and also that there are occasionally timeouts for queries that reference it, particularly when batch jobs are pulling data from it.  Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations.
    Refactor
    There are, I suggest to you, some glaringly obvious optimizations that can be made to this table.  And I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well.  I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions.

    Read the article

  • Critical Patch Updates During EBS 11i Exception to Sustaining Support Period

    - by Elke Phelps (Oracle Development)
    As previously blogged in the EBS 11i and 12.1 Support Timeline Changes entry, two important changes to the Oracle Lifetime Support policies were announced at Oracle OpenWorld 2012 - San Francisco.  These changes affect E-Business Suite Releases 11i and 12.1.
    Critical Patch Updates for EBS 11i during the Exception to Sustaining Support Period
    You may be wondering about the availability of Critical Patch Updates (CPU) for EBS 11i during the Exception to Sustaining Support period.  The following details the E-Business Suite Critical Patch Update support policy for EBS 11i during the Exception to Sustaining Support period:
    Oracle will continue to provide CPUs containing critical security fixes for E-Business Suite 11i.
    CPUs will be packaged and released as cumulative patches for both ATG RUP 6 and ATG RUP 7.
    As always, we try to minimize the number of patches and dependencies required for uptake of a CPU; however, there have been quite a few changes to the 11i baseline since its release.  For dependency reasons the 11i CPUs may require a higher number of files in order to bring them up to a consistent, stable, and well tested level.
    EBS 11i customers will continue to receive CPUs up to and including the October 2014 CPU.
    Where can I learn more?
    There are two interlocking policies that affect the E-Business Suite: Oracle's Lifetime Support policies for each EBS release (timelines which were updated by this announcement), and the Error Correction Support policies (which state the minimum baselines for new patches). For more information about how these policies interact, see: Understanding Support Windows for E-Business Suite Releases
    What about E-Business Suite technology stack components?
    Things get more complicated when one considers individual techstack components such as Oracle Forms or the Oracle Database.  To learn more about the interlocking EBS + techstack component support windows, see these two articles: On Apps Tier Patching and Support: A Primer for E-Business Suite Users; On Database Patching and Support: A Primer for E-Business Suite Users
    Where can I learn more about Critical Patch Updates?
    The Critical Patch Update Advisory is the starting point for relevant information. It includes a list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities, and links to other important documents.
    Related Articles: EBS 11i and 12.1 Support Timeline Changes; Frequently Asked Questions about Latest EBS Support Changes; Extended Support Fees Waived for E-Business Suite 11i and 12.0

    Read the article

  • Problem upgrading 11.04

    - by Krazy_Kaos
    I've been trying to upgrade my Ubuntu 11.04 desktop computer, but when I click on the upgrade button I get this error: I've tried to change my repositories, but it changes nothing in the error (on the "setting new software channel"). Can someone point me in the right direction? This is my sources.list:

        # deb http://ppa.launchpad.net/ailurus/ppa/ubuntu karmic main # disabled on upgrade to karmic
        # deb-src http://ppa.launchpad.net/ailurus/ppa/ubuntu karmic main # disabled on upgrade to karmic
        # deb cdrom:[Ubuntu 9.04 _Jaunty Jackalope_ - Release i386 (20090421.3)]/ jaunty main restricted
        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ natty main restricted multiverse universe
        ## Major bug fix updates produced after the final release of the distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ natty-updates main restricted multiverse universe
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu team.
        ## Also, please note that software in universe WILL NOT receive any review or updates
        ## from the Ubuntu security team.
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu team, and may
        ## not be under a free licence. Please satisfy yourself as to your rights to use the
        ## software. Also, please note that software in multiverse WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        ## Uncomment the following two lines to add software from the 'backports' repository.
        ## N.B. software from this repository may not have been tested as extensively as that
        ## contained in the main release, although it includes newer versions of some applications
        ## which may provide useful features. Also, please note that software in backports WILL NOT
        ## receive any review or updates from the Ubuntu security team.
        deb-src http://pt.archive.ubuntu.com/ubuntu/ jaunty-backports main restricted universe multiverse
        ## Uncomment the following two lines to add software from Canonical's 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the respective
        ## vendors as a service to Ubuntu users.
        deb http://archive.canonical.com/ubuntu natty partner
        deb-src http://archive.canonical.com/ubuntu natty partner
        deb http://us.archive.ubuntu.com/ubuntu/ natty-security main restricted multiverse universe
        deb http://us.archive.ubuntu.com/ubuntu/ natty-proposed restricted main multiverse universe
        # deb http://deb.torproject.org/torproject.org karmic main # disabled on upgrade to maverick
        # deb-src http://deb.torproject.org/torproject.org karmic main # disabled on upgrade to maverick
        deb http://extras.ubuntu.com/ubuntu natty main #Third party developers repository

    Read the article

  • C Minishell Command Expansion Printing Gibberish

    - by Optimus_Pwn
    I'm writing a Unix minishell in C, and am at the point where I'm adding command expansion. What I mean by this is that I can nest commands in other commands, for example:

        $> echo hello $(echo world! ... $(echo and stuff))
        hello world! ... and stuff

    I think I have it working mostly; however, it isn't marking the end of the expanded string correctly. For example, if I do:

        $> echo a $(echo b $(echo c))
        a b c
        $> echo d $(echo e)
        d e c

    See, it prints the c even though I didn't ask it to. Here is my code: msh.c - http://pastebin.com/sd6DZYwB expand.c - http://pastebin.com/uLqvFGPw I have more code, but there's a lot of it, and these are the parts that I'm having trouble with at the moment. I'll try to tell you the basic way I'm doing this. Main is in msh.c; here it gets a line of input from either the command line or a shell file, and then calls processline (char *line, int outFD, int waitFlag), where line is the line we just got, outFD is the file descriptor of the output file, and waitFlag tells us whether or not we should wait if we fork. When we call this from main we do it like this: processline (buffer, 1, 1); In processline, we allocate a new line: char expanded_line[EXPANDEDLEN]; We then call expand, in expand.c: expand(line, expanded_line, EXPANDEDLEN); In expand, we copy the characters literally from line to expanded_line until we find a $(, which then calls: static int expCmdOutput(char *orig, char *new, int *oldl_ind, int *newl_ind) orig is line, and new is expanded_line. oldl_ind and newl_ind are the current positions in the line and expanded line, respectively. Then we pipe, and recursively call processline, passing it the nested command (for example, if we had "echo a $(echo b)", we would pass processline "echo b"). This is where I get confused: each time expand is called, is it allocating a new chunk of memory EXPANDEDLEN long? If so, this is bad, because I'll run out of stack room really quickly (in the case of a hugely nested command-line input). In expand I insert a null character at the end of the expanded string, so why is it printing past it? If you guys need any more code, or explanations, just ask. Secondly, I put the code in pastebin because there's a ton of it, and in my experience people don't like it when I fill up several pages with code. Thanks.

    Read the article

  • What's Bringing SharePoint 2007 Server to a halt?

    - by juanlarios
    I've been having issues with my test environment and I'm hoping someone has run into this problem and can point me in the right direction. I noticed: SharePoint Server memory is through the roof at times and so is the CPU usage. Most of the CPU usage is a SQL process. Running out of disk space all the time. I looked in the logs located in the 12 hive and sure enough I have 1 GB log files that are hard to open because of the size. The following are the 3 error messages that are flooding my SharePoint logs:

        04/05/2010 16:02:36.99  OWSTIMER.EXE (0x0B94)  0x0BA4  Windows SharePoint Services  Timer  5uuf  Monitorable  The previous instance of the timer job 'Variations Propagate Page Job Definition', id '{F9A73EB4-90FE-4574-AD99-B4034056F915}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped.  Consider increasing the interval between jobs.

        04/05/2010 15:59:51.51  OWSTIMER.EXE (0x0B94)  0x0BA4  Windows SharePoint Services  Timer  5uuf  Monitorable  The previous instance of the timer job 'Profile Synchronization', id '{A05E3439-8DCD-449A-9D9E-46D601CACAA2}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped.  Consider increasing the interval between jobs.

        04/05/2010 15:56:25.53  OWSTIMER.EXE (0x0B94)  0x0BA4  Windows SharePoint Services  Timer  5uuf  Monitorable  The previous instance of the timer job 'Scheduled Unpublish', id '{6298F93F-388D-46B9-809E-CEDBB8659661}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped.  Consider increasing the interval between jobs.

        04/05/2010 15:54:14.73  OWSTIMER.EXE (0x0B94)  0x0BA4  Windows SharePoint Services  Timer  5uuf  Monitorable  The previous instance of the timer job 'Config Refresh', id '{C42DA970-3DA3-4AA2-94E5-8499C5B80A3E}' for service '{7F6D2CBE-8071-4A30-B313-7C9989FC2D87}' is still running, so the current instance will be skipped.  Consider increasing the interval between jobs.

    I'm googling around but haven't found much. I know one other person posted something about this back in 2008, but no answers were reached. I have already checked the databases to see if any of them have gone offline for whatever reason, but from SQL everything is fine. I recently re-created an SSP and deleted an old SSP, so I thought maybe that was causing it; who knows, maybe that causes some of the problems, or maybe all of them. I'm running the configuration wizard to see if anything changes. Please, if someone has had similar issues, let me know.

    Read the article

  • With AMD style modules in JavaScript is there any benefit to namespaces?

    - by gman
    Coming from C++ originally, and seeing lots of Java programmers doing the same, we brought namespaces to JavaScript. See Google's Closure library as an example, where they have a main namespace, goog, and under that many more namespaces like goog.async and goog.graphics. But now, having learned the AMD style of requiring modules, it seems like namespaces are kind of pointless in JavaScript. Not only pointless but even arguably an anti-pattern. What is AMD? It's a way of defining and including modules that removes all direct dependencies. Effectively you do this:

        // some/module.js
        define([
            'name/of/needed/module',
            'name/of/someother/needed/module',
        ], function(RefToNeededModule, RefToSomeOtherNeededModule) {
            ...code...
            return object or function
        });

    This format lets the AMD support code know that this module needs name/of/needed/module.js and name/of/someother/needed/module.js loaded. The AMD code can load all the modules and then, assuming no circular dependencies, call the define function on each module in the correct order, record the object/function returned by each module as it calls them, and then call any other modules' define function with references to those modules. This seems to remove any need for namespaces. In your own code you can call the reference to any other module anything you want. For example, if you had 2 string libraries, even if they define similar functions, as long as they follow the AMD pattern you can easily use both in the same module. No need for namespaces to solve that. It also means there are no hard-coded dependencies. For example, in Google's Closure any module could directly reference another module with something like var value = goog.math.someMathFunc(otherValue) and, if you're unlucky, it will magically work, whereas with AMD style you'd have to explicitly include the math library, otherwise the module wouldn't have a reference to it, since there are no globals with AMD. On top of that, dependency injection for testing becomes easy. None of the code in an AMD module references things by namespace, so there are no hardcoded namespace paths; you can easily mock classes at testing time. Is there any other point to namespaces, or is that something that C++ / Java programmers are bringing to JavaScript that arguably doesn't really belong?

    Read the article

  • Function like C# properties?

    - by alan2here
    I was directed here from SO as a better Stack Exchange site for this question. I've been thinking about the neatness and expressiveness of C# properties over functions, although they only currently work where no parameters are used, and wondered: is it possible, and if not, why not, to have a standalone, function-like C# property? For example:

        public class test
        {
            private byte n = 4;

            public test()
            {
                func = 2;
                byte n2 = func;
                func;
            }

            private byte func
            {
                get { return n; }
                set { n = value; }
                func { n++; }
            }
        }

    edit: Sorry for the vagueness first time round. I'm going to add some info and motivation. The 'n++' here is just a simple example, a placeholder; it's not intended to be representative of the actual code that would be used. I'm also looking at this from the point of view of looking at the property construct as is, not in the context of using it for 'get_xyz' and 'set_xyz' member functions, which is certainly useful, but instead comparing it more abstractly to functions and other programmatic elements. A 'get' property can be used instead of a function that takes no parameters, and syntactically the difference is perhaps only aesthetic, but as I see it noticeably nicer. However, properties also add the potential for an extra layer of polymorphism, one that relates to the 'func = 4;' setting, 'int n = func;' getting, or 'func;' function-like context in which they are used, as well as the more common parameter-based polymorphism. Potentially this allows a lot of expression and contextual information regarding how others would use your functions. As in many places uses and definitions would remain the same, it shouldn't break existing code.

        private byte func
        {
            get { }
            get bool { }
            set { }
            func { }
            func(bool) { }
            func(byte, myType) { }
            // etc...
        }

    So a read-only function would look like this:

        private byte func
        {
            get { }
        }

    A normal function like this:

        private void func
        {
            func { }
        }

    A function with parameter polymorphism like this:

        private byte func
        {
            func(bool) { }
            func(byte, myType) { }
        }

    And a function that could return a value, or just compute, depending on the context it is used in, that also has more conventional parameter polymorphism as well, like so:

        private byte func
        {
            get { }
            func(bool) { }
            func(byte, myType) { }
        }
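    For comparison, here is a sketch of the closest existing C# equivalents to the proposed syntax (all names are illustrative). There is no "call" accessor today, but a delegate-typed property gets part of the way there, since it can be read, assigned, and invoked, giving three usage contexts from one member:

        using System;

        public class Counter
        {
            private byte n = 4;

            // plain property: covers the 'func = 2;' and 'byte n2 = func;' contexts
            public byte Value
            {
                get { return n; }
                set { n = value; }
            }

            // delegate-typed property: 'counter.Bump();' reads the property and then invokes
            // the returned delegate, which is roughly the 'func;' context from the question
            public Action Bump
            {
                get { return () => n++; }
            }
        }

        // usage:
        //   var c = new Counter();
        //   c.Value = 2;        // set
        //   byte n2 = c.Value;  // get
        //   c.Bump();           // "call" via the delegate-typed property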

    Read the article

  • Using the @ in SQL Azure Connections

    - by BuckWoody
    The other day I was working with a client on an application they were changing to a hybrid architecture – some data on-premise and other data in SQL Azure and Windows Azure Blob storage. I had them make a couple of corrections - the first was that all communications to SQL Azure need to be encrypted. It’s a simple addition to the connection string, depending on the library you use. Which brought up another interesting point. They had been using something that looked like this, using the .NET provider: Server=tcp:[serverName].database.windows.net;Database=myDataBase; User ID=LoginName;Password=myPassword; Trusted_Connection=False;Encrypt=True; This includes most of the formatting needed for SQL Azure. It specifies TCP as the transport mechanism, the database name is included, Trusted_Connection is off, and encryption is on. But it needed one more change: Server=tcp:[serverName].database.windows.net;Database=myDataBase; User ID=[LoginName]@[serverName];Password=myPassword; Trusted_Connection=False;Encrypt=True; Notice the difference? It’s the User ID parameter. It includes the @ symbol and the name of the server – not the whole DNS name, just the server name itself. The developers were a bit surprised, since it had been working with the first format that just used the user name. Why did both work, and why is one better than the other? It has to do with the connection library you use. For most libraries, the user name is enough. But for some libraries (subject to change so I don’t list them here) the server name parameter isn’t sent in the way the load balancer understands, so you need to include the server name right in the login, so the system can parse it correctly. Keep in mind, the string limit for that is 128 characters – so take the @ symbol and the server name into consideration for user names. The user connection info is detailed here: http://msdn.microsoft.com/en-us/library/ee336268.aspx Upshot? Include the @servername on your connection string just to be safe. And plan for that extra space…  
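    For completeness, here is a minimal sketch (the bracketed server, database, and credential values are placeholders, exactly as in the strings above) of using the corrected string from ADO.NET; the only part that changes from an on-premise connection is the string itself:

        using System.Data.SqlClient;

        // Minimal sketch: open a SQL Azure connection with the user@server form and encryption on.
        var connectionString =
            "Server=tcp:[serverName].database.windows.net;Database=myDataBase;" +
            "User ID=[LoginName]@[serverName];Password=myPassword;" +
            "Trusted_Connection=False;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new SqlCommand("SELECT 1", connection))
            {
                var result = command.ExecuteScalar();   // simple round trip to confirm the connection works
            }
        }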

    Read the article

  • Cone of Uncertainty in classic and agile projects

    - by DigiMortal
    David Starr from Scrum.org gave an interesting session at TechEd Europe 2012 - Implementing Scrum Using Team Foundation Server 2012. One of the interesting things for me was how the Cone of Uncertainty looks in agile projects (or how agile methodologies distort the cone we know from waterfall projects). This posting illustrates two cones – one for the waterfall world and one for the agile world. Cone of Uncertainty The Cone of Uncertainty was introduced to the software development community by Steve McConnell, and it visualizes how accurate our estimates are over the project timeline. Here is the Cone of Uncertainty when we deal with waterfall and Big Design Up-Front (BDUF). Cone of Uncertainty. Taken from the MSDN Library page Estimating. The closer we are to the project's end, the more accurate our estimates are. When the project ends we know exactly how much time every task took. As we can see, the cone is wide when we usually have to give our estimates – it happens somewhere between Initial Project Concept and Requirements Complete. Don't ask me why Initial Project Concept is the stage where some companies give their best estimates – they just do it every time and don't learn a thing later. This cone is inevitable for software development, and agile methodologies that try to make the software world better are also able to change the cone. Cone of Uncertainty in agile projects Agile methodologies usually try to avoid BDUF, waterfalls and other things that make all our mistakes highly expensive. Of course, we are not the only ones who make mistakes – don't forget our dear customers either. Agile methodologies treat development as creational work and focus on making it better. One main trick is to focus on small and short iterations. What does that mean? We are estimating functionalities that are easier for us to understand and implement. Therefore our estimates are more accurate. As we move from a few big iterations to many small iterations, we also distort and slice the Cone of Uncertainty. This is how the cone looks when agile methodologies are used. Cone of Uncertainty in agile projects. We have more cones to live with, but they are way smaller. I don't have any numbers to put here because I couldn't find any, but still this "chart" should give you the point: more, smaller iterations cause more but way smaller cones of uncertainty. We can handle these small uncertainties because the steps we take to complete small tasks are more predictable and don't grow above our heads very often. One more note. Consider that both of the charts given in this posting describe exactly the same phase of the same project – just the uncertainties are different.

    Read the article

  • How to keep a data structure synchronized over a network?

    - by David Gouveia
    Context In the game I'm working on (a sort of point-and-click graphic adventure), pretty much everything that happens in the game world is controlled by an action manager that is structured a bit like: So for instance if the result of examining an object should make the character say hello, walk a bit and then sit down, I simply spawn the following code:

        var actionGroup = actionManager.CreateGroup();
        actionGroup.Add(new TalkAction("Guybrush", "Hello there!"));
        actionGroup.Add(new WalkAction("Guybrush", new Vector2(300, 300)));
        actionGroup.Add(new SetAnimationAction("Guybrush", "Sit"));

    This creates a new action group (an entire line in the image above) and adds it to the manager. All of the groups are executed in parallel, but actions within each group are chained together so that the second one only starts after the first one finishes. When the last action in a group finishes, the group is destroyed. Problem Now I need to replicate this information across a network, so that in a multiplayer session all players see the same thing. Serializing the individual actions is not the problem. But I'm an absolute beginner when it comes to networking and I have a few questions. I think for the sake of simplicity in this discussion we can abstract the action manager component to being simply:

        var actionManager = new List<List<string>>();

    How should I proceed to keep the contents of the above data structure synchronized between all players? Besides the core question, I'm also having a few other concerns related to it (i.e. all possible implications of the same problem above): If I use a server/client architecture (with one of the players acting as both a server and a client), and one of the clients has spawned a group of actions, should he add them directly to the manager, or only send a request to the server, which in turn will order every client to add that group? What about packet losses and the like? The game is deterministic, but I'm thinking that any discrepancy in the sequence of actions executed in a client could lead to inconsistent states of the world. How do I safeguard against that sort of problem? What if I add too many actions at once, won't that cause problems for the connection? Any way to alleviate that?
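    One common pattern that touches several of the questions above is to make the server the single authority over the action list: clients never mutate the manager directly, they only send requests, and every mutation reaches every client (including the sender) as a sequence-numbered message. The sketch below is only an illustration of that idea (all class and method names are hypothetical, and the transport is left abstract):

        using System.Collections.Generic;

        // Hypothetical sketch of server-authoritative replication of the action-group list.
        // The client only *requests*; the server assigns a sequence number and broadcasts;
        // every client (including the requester) applies groups strictly in sequence order.
        class ActionGroupMessage
        {
            public int Sequence;          // assigned by the server, defines the one true order
            public List<string> Actions;  // serialized actions, per the simplification above
        }

        class Server
        {
            private int nextSequence = 0;

            public void OnClientRequest(List<string> actions)
            {
                var msg = new ActionGroupMessage { Sequence = nextSequence++, Actions = actions };
                BroadcastToAllClients(msg);   // reliable, ordered channel assumed (e.g. TCP)
            }

            void BroadcastToAllClients(ActionGroupMessage msg) { /* transport-specific */ }
        }

        class Client
        {
            private int expectedSequence = 0;
            private readonly List<List<string>> actionManager = new List<List<string>>();

            public void OnMessageFromServer(ActionGroupMessage msg)
            {
                if (msg.Sequence != expectedSequence)
                {
                    // gap detected: buffer or re-request instead of applying out of order
                    return;
                }
                expectedSequence++;
                actionManager.Add(msg.Actions);   // only now does the group actually start executing
            }
        }

    Because even the originating client waits for the server's broadcast, every machine applies the same groups in the same order, which is what protects a deterministic simulation from drifting; packet loss then becomes a transport concern (use a reliable, ordered channel) rather than a game-logic one.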

    Read the article

  • Why doesn't wifi work in this case?

    - by xRobot
    I have a brand new notebook where I have installed Windows 7 and Ubuntu 12.04 LTS 64bit in dual boot. In windows 7 wifi works but in Ubuntu not. Could you help me please ? iwconfig lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:off eth0 no wireless extensions. lshw -C network *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:01:00.0 logical name: eth0 version: 07 serial: b4:b5:1f:1b:9a:56 size: 10Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=rtl8168e-3_0.0.4 03/27/12 latency=0 link=no multicast=yes port=MII speed=10Mbit/s resources: irq:41 ioport:3000(size=256) memory:c2404000-c2404fff memory:c2400000-c2403fff *-network description: Wireless interface product: Ralink corp. vendor: Ralink corp. physical id: 0 bus info: pci@0000:02:00.0 logical name: wlan0 version: 00 serial: 84:4b:f4:0a:3a:22 width: 32 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=rt2800pci driverversion=3.2.0-31-generic firmware=0.34 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:18 memory:c2500000-c250ffff lspci | grep -i net 01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 07) 02:00.0 Network controller: Ralink corp. Device 539a iwlist scan lo Interface doesn't support scanning. wlan0 Interface doesn't support scanning : Device or resource busy eth0 Interface doesn't support scanning. 
lsmod Module Size Used by rfcomm 47604 0 bnep 18281 2 bluetooth 180104 10 rfcomm,bnep parport_pc 32866 0 ppdev 17113 0 snd_hda_codec_hdmi 32474 1 snd_hda_codec_realtek 224173 1 joydev 17693 0 hp_wmi 18092 0 sparse_keymap 13890 1 hp_wmi snd_hda_intel 33773 3 snd_hda_codec 127706 3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel snd_hwdep 13668 1 snd_hda_codec snd_pcm 97188 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec snd_seq_midi 13324 0 snd_rawmidi 30748 1 snd_seq_midi snd_seq_midi_event 14899 1 snd_seq_midi snd_seq 61896 2 snd_seq_midi,snd_seq_midi_event snd_timer 29990 2 snd_pcm,snd_seq snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq psmouse 97362 0 snd 78855 16 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device arc4 12529 2 rt2800pci 18715 0 rt2800lib 58925 1 rt2800pci crc_ccitt 12667 1 rt2800lib rt2x00pci 14577 1 rt2800pci rt2x00lib 51144 3 rt2800pci,rt2800lib,rt2x00pci mac80211 506816 3 rt2800lib,rt2x00pci,rt2x00lib soundcore 15091 1 snd mac_hid 13253 0 uvcvideo 72627 0 videodev 98259 1 uvcvideo v4l2_compat_ioctl32 17128 1 videodev wmi 19256 1 hp_wmi i915 473240 3 cfg80211 205544 2 rt2x00lib,mac80211 eeprom_93cx6 12725 1 rt2800pci drm_kms_helper 46978 1 i915 drm 242038 4 i915,drm_kms_helper i2c_algo_bit 13423 1 i915 snd_page_alloc 18529 2 snd_hda_intel,snd_pcm mei 41616 0 serio_raw 13211 0 video 19596 1 i915 lp 17799 0 parport 46562 3 parport_pc,ppdev,lp usbhid 47199 0 hid 99559 1 usbhid r8169 62099 0 rfkill list: # rfkill list 0: phy0: Wireless LAN Soft blocked: no Hard blocked: no 1: hp-wifi: Wireless LAN Soft blocked: no Hard blocked: no

    Read the article

  • SOA Summit - Oracle Session Replay

    - by Bruce Tierney
    If you think you missed the most recent Integration Developer News (IDN) "SOA Summit" 2013... good news, you didn't.  At least not the replay of the Oracle session titled: Three Solutions for Simplifying Cloud/On-Premises Integration. As you will see in the replay below, this session introduces three common reasons for integration complexity: Disparate Toolkits; Lack of API Management; Rigid, Brittle Infrastructure - and then the three solutions to these challenges: Unify Cloud and On-premises Integration; Enable Multi-channel Development with API Management; Plan for the Unexpected - Future Readiness. The last solution, on future readiness, describes how you can transition from being reactive to new trends, such as the Internet of Things (IoT), by modifying your integration strategy to enable business agility, and how to recognize trends through Fast Data event processing ahead of your competition. Oracle SOA Suite customer SFpark's (San Francisco Metropolitan Transit Authority) implementation with API Management is covered, as shown in the screenshot to the right. This case study covers the core areas of API Management for partners to build their own applications by leveraging parking availability and real-time pricing, as well as mobile enablement of data integrated by SOA Suite underneath.  Download the free SFpark app from the Apple and Android app stores to check it out. When looking into the future, the discussion starts with a historical look to better prepare for what comes next.   As shown in the image below, one of the next frontiers after mobile and cloud integration is a deeper level of direct "enterprise to customer" interaction.  Much of this relates to the Internet of Things.  Examples of IoT from the perspective of SOA and integration are also covered in the session. For example, early adopter Turkcell and their tracking of mobile phone users as they move from point A to B to C is shown in the image to the right.   As you look into more "smart services" such as Location-Based Services, how "future ready" is your application infrastructure?  . . . Check out the replay by clicking the video image below to learn about these three challenges and solutions, including how to "future ready" your application infrastructure:

    Read the article

  • Got Samba, Got PyNeighbourhood but still no connection. What else do I need?

    - by Frank A
    I am sure I had already hit post before, but then could only find it by backing through the browser. Was it deleted? Is the question too dumb? Sorry that I do not know the right jargon; I'm just trying to get answers to my problem. Anyway, I have reworded stuff a bit. This seems to be a number one requirement for lots of people, and 2 months on from setting up my Ubuntu PC, I am still unable to get a lasting connection in either direction. Adding a Windows PC to a network is so easy... just a few clicks and you get on with using it all. Using all-command approaches and modifying configuration files is hardly user friendly. Googling brings up thousands of solutions, but mostly they are too techy or assume the user is fully aware of how to use Linux. I do realise that there must be a lot of flavours for connecting to networks. So far I have installed Samba and fiddled with its config file. The day I did all that, it worked from XP to Ubuntu. When I came back two days later to transfer my data over, it would not connect, although the share does show up in Windows (XP) My Network Places. Today I installed PyNeighbourhood and this shows the Ubuntu box and all of the shares I had created at some point on Ubuntu, and it even shows this under the XP workgroup name. But instructions on setting the connection up seem to relate to an earlier version, and nothing seems to work there either. (I unshared most of those test folders but they still show up here, but that is another question.) When I click on mount - I can only click on one on the Ubuntu machine; there is one with no name, so I assume this to be my attempt to add one XP shared drive using its IP address - I get errors:

        (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        mount error(6): No such device or address
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    OK, I tried to find the manual referred to... only an old comment that a manual would be produced for future versions. I saw in another thread that Winbind is needed as well, or at least I assume so? Totally lost again? Please help: what else needs to be installed to connect to Windows PCs on the network.

    Read the article

  • Collision in PyGame for spinning rectangular object.touching circles

    - by OverAchiever
    I'm creating a variation of Pong. One of the differences is that I use a rectangular structure as the object which is being bounced around, and I use circles as paddles. So far, all the collision handling I've worked with has used simple math (I wasn't using the collision "feature" in PyGame). The game is contained within a 2-dimensional continuous space. The idea is that the rectangular structure will spin at different speeds depending on how far from the center you touch it with the circle. Also, any extremity of the rectangular structure should be able to touch any extremity of the circle. So I need to keep track of where it has been touched on both the circle and the rectangle to figure out the direction it will be bounced in. I intend to have basically 8 possible directions (up, down, left, right and the half points between each one of those). I can work out the calculation of how the object will be dislocated once I get the direction it will be dislocated in, based on where it has been touched. I also need to keep track of where it has been touched to decide if the rectangular structure will spin clockwise or counter-clockwise after it has collided. Before I started coding, I read the resources available at the PyGame website on the collision class they have (and its respective functions). I tried to work out the logic of what I was trying to achieve based on those resources and how the game will function. The only thing I could figure out that I could do was to make each one of these objects a group of rectangular objects, and depending on which rectangle was touched the other would behave accordingly and give the illusion it is a single object. However, not only do I not know if this will work, but I also don't know if it is going to look convincing, based on how PyGame redraws the objects. Is there a way I can use PyGame to handle these collision detections while still having a single object? Can I figure out the point of collision on both objects using functions within PyGame precisely enough to achieve what I'm looking for? P.S.: I hope the question was specific and clear enough. I apologize if there were any grammar mistakes; English is not my native language.

    Read the article
