Search Results

Search found 43645 results on 1746 pages for 'student question'.


  • How to maintain symlinks in linux file manager?

    - by MountainX
    I want to use symlinks extensively. However, if I move the target file, the symlink becomes broken (unlike on Windows). That's not acceptable to me, so I either need a solution or I won't be able to use symlinks the way I wish to. Is there a solution that will work with the Dolphin file manager? A command-line solution is described on commandlinefu. In summary, it is something like one of these:

        lmv(){ for a in ${@:1:$(expr $# - 1)}; do [ -e "$a" -a -e "${@:$#:1}" ] && mv "$a" "${@:$#:1}" && ln -s "${@:$#:1}"/"$(basename "$a")" "$(dirname "$a")"; done; }
        lmv(){ for a in ${@:1:$(expr $# - 1)}; do [ -e "$a" -a -e "${@:$#}" ] && mv "$a" "${@:$#}" && ln -s "${@:$#}"/"$(basename "$a")" "$(dirname "$a")"; done; }

    But about half the time I'm using a file manager (Dolphin), so I need a complete solution to this problem. Is a solution available for a GUI file manager? EDIT: The context of this question is that I'm searching for an alternative to hardlinks. I previously asked this question about the pitfalls of hardlinks.
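    For readability, here is the same idea expanded into a multi-line function (a minimal sketch of the one-liner above; it assumes the last argument is an existing destination directory and does no error handling beyond the existence checks):

        # lmv: move each FILE to DEST, leaving a symlink behind at the old location
        # usage: lmv FILE... DEST
        lmv() {
            local dest="${@:$#:1}"          # the last argument is the destination
            local a
            for a in "${@:1:$#-1}"; do      # every argument except the last
                [ -e "$a" ] && [ -e "$dest" ] || continue
                mv "$a" "$dest" && ln -s "$dest/$(basename "$a")" "$(dirname "$a")"
            done
        }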

    Read the article

  • Deleting a row from self-referencing table

    - by Jake Rutherford
    Came across this the other day and thought “this would be a great interview question!” I’d created a table with a self-referencing foreign key. The application was calling a stored procedure I’d created to delete a row, which caused, of course, a foreign key exception. You may say “why not just use the cascade delete option?” Good question, easy answer. With a typical foreign key relationship between different tables, that would work. However, even SQL Server cannot do a cascade delete of a row on a table with a self-referencing foreign key. So, what do you do? In my case I re-wrote the stored procedure to take advantage of recursion:

        -- recursively deletes a Foo
        ALTER PROCEDURE [dbo].[usp_DeleteFoo]
            @ID int
            ,@Debug bit = 0
        AS
            SET NOCOUNT ON;
            BEGIN TRANSACTION
            BEGIN TRY
                DECLARE @ChildFoos TABLE
                (
                    ID int
                )
                DECLARE @ChildFooID int

                INSERT INTO @ChildFoos
                SELECT ID FROM Foo WHERE ParentFooID = @ID

                WHILE EXISTS (SELECT ID FROM @ChildFoos)
                BEGIN
                    SELECT TOP 1 @ChildFooID = ID FROM @ChildFoos
                    DELETE FROM @ChildFoos WHERE ID = @ChildFooID
                    EXEC usp_DeleteFoo @ChildFooID
                END

                DELETE FROM dbo.[Foo]
                WHERE [ID] = @ID

                IF @Debug = 1 PRINT 'DEBUG:usp_DeleteFoo, deleted - ID: ' + CONVERT(VARCHAR, @ID)
                COMMIT TRANSACTION
            END TRY
            BEGIN CATCH
                ROLLBACK TRANSACTION
                DECLARE @ErrorMessage VARCHAR(4000), @ErrorSeverity INT, @ErrorState INT
                SELECT @ErrorMessage = ERROR_MESSAGE(), @ErrorSeverity = ERROR_SEVERITY(), @ErrorState = ERROR_STATE()
                IF @ErrorState <= 0 SET @ErrorState = 1
                INSERT INTO ErrorLog(ErrorNumber,ErrorSeverity,ErrorState,ErrorProcedure,ErrorLine,ErrorMessage)
                VALUES(ERROR_NUMBER(), @ErrorSeverity, @ErrorState, ERROR_PROCEDURE(), ERROR_LINE(), @ErrorMessage)
                RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState)
            END CATCH

    This procedure first determines any rows which have the row we wish to delete as their parent. It then iterates the child rows, calling the procedure recursively in order to delete all descendants before eventually deleting the row we wish to delete.
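    A quick usage sketch (the rows are hypothetical): given a Foo table in which ParentFooID references Foo.ID, deleting the root of a subtree is a single call:

        -- hypothetical subtree: 1 is the root; 2 and 3 are children of 1; 4 is a child of 2
        EXEC dbo.usp_DeleteFoo @ID = 1;
        -- the recursion removes 4 before 2, and 2 and 3 before 1

    One caveat worth noting: SQL Server caps procedure nesting at 32 levels, which bounds how deep a hierarchy this approach can delete.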

    Read the article

  • How to handle multiple effect files in XNA

    - by Adam 'Pi' Burch
    So I'm using ModelMesh and its built-in Effects parameter to draw a mesh with some shaders I'm playing with. I have a simple GUI that lets me change these parameters to my heart's desire. My question is, how do I handle shaders that have unique parameters? For example, I want a 'shiny' parameter that affects shaders with Phong-type specular components, but for an environment mapping shader such a parameter doesn't make a lot of sense. How I have it right now is that every time I call the ModelMesh's Draw() function, I set all the Effect parameters like so:

        foreach (ModelMesh m in model.Meshes)
        {
            // Slightly change the way the world matrix is calculated when using the
            // bunny object, since it is not quite centered in object space
            if (isDrawBunny == true)
            {
                world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale) * rotation * Matrix.CreateTranslation(position + bunnyPositionTransform);
            }
            else // If not rendering the bunny, draw normally
            {
                world = boneTransforms[m.ParentBone.Index] * Matrix.CreateScale(scale) * rotation * Matrix.CreateTranslation(position);
            }
            foreach (Effect e in m.Effects)
            {
                Matrix ViewProjection = camera.ViewMatrix * camera.ProjectionMatrix;
                e.Parameters["ViewProjection"].SetValue(ViewProjection);
                e.Parameters["World"].SetValue(world);
                e.Parameters["diffuseLightPosition"].SetValue(lightPositionW);
                e.Parameters["CameraPosition"].SetValue(camera.Position);
                e.Parameters["LightColor"].SetValue(lightColor);
                e.Parameters["MaterialColor"].SetValue(materialColor);
                e.Parameters["shininess"].SetValue(shininess);
                //e.Parameters
                //e.Parameters["normal"]
            }
            m.Draw();
        }

    Note the prescience of the example! The solutions I've thought of involve preloading all the shaders, and updating the unique parameters as needed. So my question is, is there a best practice I'm missing here? Is there a way to pull the parameters a given Effect needs from that Effect? Thank you all for your time!
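    For the last subquestion, a minimal sketch (assuming XNA 4.0, where EffectParameterCollection's string indexer returns null for a parameter the compiled effect does not declare) of setting an optional parameter only when it exists:

        // Set 'shininess' only on effects that actually declare it
        EffectParameter shiny = e.Parameters["shininess"];
        if (shiny != null)
        {
            shiny.SetValue(shininess);
        }

    Iterating e.Parameters also works if you want to discover by name what each effect expects.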

    Read the article

  • Euler Problem 1: Code Optimization / Alternatives [on hold]

    - by Sudhakar
    I am a newbie to the world of data structures and algorithms, learning from the ground up. This is my attempt to learn. If the question is very plain/simple, please bear with me. Problem: Find the sum of all the multiples of 3 or 5 below 1000. Code I wrote:

        package problem1;

        public class Problem1 {
            public static void main(String[] args) {
                // ****************** Approach 1 ****************
                long start = System.currentTimeMillis();
                int total = 0;
                int toSubtract = 0;
                int limit = 10000;
                // Complexity N/3
                for (int i = 3; i < limit; i = i + 3) {
                    total = total + i;
                }
                // Complexity N/5
                for (int i = 5; i < limit; i = i + 5) {
                    total = total + i;
                }
                // Complexity N/15
                for (int i = 15; i < limit; i = i + 15) {
                    toSubtract = toSubtract + i;
                }
                // 9N/15 = 0.6 N
                System.out.println(total - toSubtract);
                System.out.println("Completed in " + (System.currentTimeMillis() - start));

                // ****************** Approach 2 ****************
                for (int i = 3; i < limit; i = i + 3) {
                    total = total + i;
                }
                for (int i = 5; i < limit; i = i + 5) {
                    if (0 != (i % 3)) total = total + i;
                }
            }
        }

    Questions: 1 - Which is the best approach from the above code, and why? 2 - Are there any better alternatives?
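    For question 2, one standard alternative (a sketch, not from the original post): replace the loops with the closed-form arithmetic-series sum, using inclusion-exclusion so multiples of 15 are not counted twice:

        // Sum of the positive multiples of k strictly below n:
        // k + 2k + ... + mk = k * m * (m + 1) / 2, where m = (n - 1) / k
        static long sumOfMultiplesBelow(int k, int n) {
            long m = (n - 1) / k;
            return k * m * (m + 1) / 2;
        }

        // multiples of 3, plus multiples of 5, minus the doubly counted multiples of 15
        long answer = sumOfMultiplesBelow(3, 1000)
                    + sumOfMultiplesBelow(5, 1000)
                    - sumOfMultiplesBelow(15, 1000);   // 233168

    This runs in constant time regardless of the limit.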

    Read the article

  • How do I check on non-transparent pixels in a bitmapdata?

    - by Opoe
    I'm still working on my window cleaning game from one of my previous questions. I marked a contribution as my answer, but after all this time I can't get it to work, and I have too many questions about this, so I decided to ask some more. As a sequel to my mentioned previous question, my question to you is: How can I check whether or not a BitmapData contains non-transparent pixels? Subquestion: Is this possible when the masked image is a movieclip? Shouldn't I use graphics instead? Information I have: a dirty window movieclip on the bottom layer and a clean window movieclip (mc1) on the layer above. To hide the top layer (the dirty window) I assign a mask to it. Code:

        // this creates a mask that hides the movieclip on top
        var mask_mc:MovieClip = new MovieClip();
        addChild(mask_mc);
        // assign the mask to the movieclip it should 'cover'
        mc1.mask = mask_mc;

    With a brush (cursor) the player wipes off the dirt (actually setting the fill of the mask so the clean window appears):

        // add event listeners for the 'brush'
        brush_mc.addEventListener(MouseEvent.MOUSE_DOWN, brushDown);
        brush_mc.addEventListener(MouseEvent.MOUSE_UP, brushUp);

        // function to drag the brush over the mask
        function brushDown(dragging:MouseEvent):void {
            dragging.currentTarget.startDrag();
            MovieClip(dragging.currentTarget).addEventListener(Event.ENTER_FRAME, erase);
            mask_mc.graphics.moveTo(brush_mc.x, brush_mc.y);
        }

        // function to stop dragging the brush over the mask
        function brushUp(dragging:MouseEvent):void {
            dragging.currentTarget.stopDrag();
            MovieClip(dragging.currentTarget).removeEventListener(Event.ENTER_FRAME, erase);
        }

        // fill the mask with transparent pixels so the movieclip turns visible
        function erase(e:Event):void {
            with (mask_mc.graphics) {
                beginFill(0x000000);
                drawRect(brush_mc.x, brush_mc.y, brush_mc.width, brush_mc.height);
                endFill();
            }
        }
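    For the main question, one approach (a sketch; it assumes you first rasterize the masked clip into a BitmapData) is BitmapData.getColorBoundsRect(), which can locate every pixel whose alpha channel is non-zero:

        // draw the masked movieclip into a transparent BitmapData
        var bmd:BitmapData = new BitmapData(mc1.width, mc1.height, true, 0x00000000);
        bmd.draw(mc1);
        // bounding rectangle of all pixels whose alpha is NOT zero
        // (findColor = false means "pixels not matching the given color")
        var opaque:Rectangle = bmd.getColorBoundsRect(0xFF000000, 0x00000000, false);
        var hasVisiblePixels:Boolean = opaque.width > 0 && opaque.height > 0;

    Because draw() accepts any IBitmapDrawable, this also answers the subquestion: a MovieClip can be drawn into the BitmapData first.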

    Read the article

  • Be liberal in what you accept... or not?

    - by Matthieu M.
    [Disclaimer: this question is subjective, but I would prefer getting answers backed by facts and/or reflections] I think everyone knows about the Robustness Principle, usually summed up by Postel's Law: Be conservative in what you send; be liberal in what you accept. I would agree that for the design of a widespread communication protocol this may make sense (with the goal of allowing easy extension); however, I have always thought that its application to HTML / CSS was a total failure, each browser implementing its own silent tweak detection / behavior, making it near impossible to obtain a consistent rendering across multiple browsers. I do notice, though, that the RFC of the TCP protocol deems "Silent Failure" acceptable unless otherwise specified... which is an interesting behavior, to say the least. There are other examples of the application of this principle throughout the software trade that regularly pop up because they have bitten developers; off the top of my head: JavaScript semicolon insertion, and C's silent built-in conversions (which would not be so bad if they did not truncate...). And there are tools to help implement "smart" behavior: name-matching phonetic algorithms (Double Metaphone) and string distance algorithms (Levenshtein distance). However I find that this approach, while it may be helpful when dealing with non-technical users or to help users in the process of error recovery, has some drawbacks when applied to the design of a library/class interface: it is somewhat subjective whether the algorithm guesses "right", and thus it may go against the Principle of Least Astonishment; it makes the implementation more difficult, thus giving more chances to introduce bugs (violation of YAGNI?); it makes the behavior more susceptible to change, as any modification of the "guess" routine may break old programs, nearly excluding refactoring possibilities... from the start! And this is what led me to the following question: When designing an interface (library, class, message), do you lean toward the robustness principle or not? I myself tend to be quite strict, using extensive input validation on my interfaces, and I was wondering if I was perhaps too strict.
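    To make the semicolon-insertion example concrete (a classic illustration, not from the original post):

        // JavaScript's automatic semicolon insertion silently rewrites this:
        function getPoint() {
            return        // <- a semicolon is inserted here
            {
                x: 1,
                y: 2
            };
        }
        getPoint();       // returns undefined, not the object literal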

    Read the article

  • Are Intel compilers really better than Microsoft ones?

    - by Rocket Surgeon
    Years ago I was surprised when I discovered that Intel sells Visual Studio compatible compilers. I tried it in particular for C/C++, as well as the fantastic diagnostic tools. But the code was simply not computationally intensive enough to notice the difference. The only impression was: did Intel really do it for me just now? Wow, amazing tools with nanosecond resolution, unbelievable. But the trial ended and the team never seriously considered a purchase. From your experience, if license cost does not matter, which vendor is the winner? This is not a broad or vague question, or an attempt to spark a holy war; it is a question about two very visible tools. Nobody likes it when tools have any mysteries or surprises, and choices between best and best are always a pain. I also understand the "grass is greener" argument. I want to hear all the "what if" stories. What if Intel just locally optimizes it for the chip stepping of the month, and not every hardware target will actually work as well as Microsoft-compiled code? What if AMD hardware is the target and everything will slow down for no reason? Or, on the other hand, what if Intel's hardware has so many unnoticed opportunities that Microsoft compiler writers are too slow to adopt them and never implement them in the compiler? What if both are exactly the same, actually a single codebase just wrapped into two different boxes and licensed to both vendors by some third-party shop? And so on. But surely someone knows some answers.

    Read the article

  • Forum engine with full LDAP integration [closed]

    - by Andrian Nord
    We are looking for a forum engine which can actually maintain its user data in LDAP, perhaps via mods. The core point is the ability to maintain the data, i.e. all user profile settings, like nickname, password, email, avatar, birthday and others (preferably configurable). One example of good LDAP integration, of the level I'm expecting, is Drupal's LDAP integration, which allows mapping any user attribute into LDAP and keeps it in sync with the database. A year ago I did a small survey of existing free/FOSS engines and found a few forum engines with LDAP integration, namely SMF, phpBB and something else. The most maintained solution was provided by phpBB3, which supports LDAP integration out of the box, but it is unable to sync data with changes made on the LDAP server by other software. Actually it wasn't even propagating changes back; I'm not even talking about the ability to map additional attributes (other than name/password/email). Also, I haven't found any forum whose architecture has a proper abstraction over user settings, so I doubt that these engines (including phpBB) can be modded to add such functionality without dramatic changes to the core codebase. More recent research showed that even some commercial software, like IPB, is unable to keep its database synced with an LDAP directory and map additional attributes. In other words, all the support I've seen so far is simple user creation upon a user's first login, which is not good for us, as the forum is not the primary site and should not maintain its own user base (to reduce the risk of possible collisions). LDAP import is required because many other services (FTP, email, Jabber, the Drupal site) use the same user base. Currently we have a forum embedded into the Drupal site, but we are unsatisfied with its features. BTW, we are using Linux, and this is not a duplicate of this question, as its author seems to be satisfied with the behaviour described above. So, my question is: Are there any (preferably FOSS and free) forum engines that can import, export, keep in sync, or otherwise integrate with an LDAP user database (preferably with the ability to map additional fields to LDAP attributes)?

    Read the article

  • How is"cloud computing"different from "client-server"?

    - by BellevueBob
    Watching the CEO of a new "cloud computing" company describe his company on a finance TV program today, I heard him say something like "Cloud computing is superior to old-fashioned client-server computing". Now I'm confused. Can someone please explain what "cloud computing" means in contrast to client-server? As far as I understand it, cloud computing is more of a network services model, such that I do not own or maintain the physical hardware. The "cloud" is all the back-end stuff. But I still might have an application that communicates with that "cloud" environment. And if I run a web site that presents a form which a user fills out, pushes a button on the page, and gets back some report generated by the web server, isn't that the same as "cloud" computing? And would you not consider my web browser the "client"? Please note my question is specific to the concept of "cloud computing" with respect to "client-server". Sorry if this is an inappropriate question for this site; it's the one closest in the Stack universe and this is my first time here. I'm an old-timer, programming since mainframe days in the late '70s.

    Read the article

  • How can I downgrade a system that accidentally had backports installed?

    - by Glyph
    I installed a fresh Ubuntu system. Somehow - possibly through my own error - the backports repository got enabled. Then I did several upgrades. I noticed that this happened when networking suddenly stopped working: "Network Settings" now has an "(alpha)" in the title bar, and "System Settings" → "Network" now displays an error dialog saying "The system network services are not compatible with this version". Now I've disabled the backports repository, and I'd like to restore my system to its previously functional state. My question is twofold: 1. How do I determine which packages were installed from backports? 2. Can I automatically re-install all those packages (and purge their configuration) to get back to a sensible state? If the answer to 2 is "no" I can probably manually purge some things and reinstall, but it would be nice to have it handled automatically. Update: It wasn't an update that broke the network; it was apt-get install indicator-network, which installed something called "connman" and removed network-manager and network-manager-gnome. Nevertheless I am leaving the question up, since I am still interested in how I can purge packages from a particular source after accidentally adding that source, and how I can determine which packages were installed from where.
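    For the first question, a possible starting point (a sketch; it assumes the apt-show-versions package is installed, and 'precise' stands in for whatever release you are on):

        # list installed packages whose version came from a backports pocket
        apt-show-versions | grep backports

        # downgrade one of them to the version in the main release archive
        # (the package name here is only an example)
        sudo apt-get install network-manager/precise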

    Read the article

  • Delphi Client-Server Application using Firebird 2.5 error

    - by Japie Bosman
    I have got a lengthy question to ask. First of all, I'm still very new when it comes to Delphi programming, and my experience has been mostly developing small single-user database applications using ADO and an Access database. I need to make the transition now to a client-server application, and this is where the problem starts. I decided to use Firebird 2.5 embedded as my database, as it is open source, it can be used with the InterBase components in Delphi, and multiple clients can access the database simultaneously. So I followed the InterBase tutorial in Delphi. I managed to connect the client to the server and see the data in the example (while both were running on my PC), but when I tried to move the client to another PC, keeping the server on mine and running it to see if I could connect to the server, it gave me the following error: Exception EIdSocketError in module clientDemo.exe at 0029DCAC. Socket Error # 10061 Connection refused. I understand that this might be because the host is defined as localhost in the client. But here is my first question: in the TSQLConnection you can set the hostname under Driver-Hostname. The thing I want to know is how you set this at run time, as I cannot get at the property when I try to make an edit box to allow the user to enter the value and then set it via code, for example: SQLConnection1.Driver.Hostname := edtHost.Text; There is no such property to set, so how do you set the hostname at run time? I'm using Delphi XE2. There are still a lot of questions to come, especially when it comes to deployment, but I will take this piece by piece, and I appreciate the advice.
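    One possible direction (a hedged sketch; in dbExpress the driver settings are exposed through the connection's Params string list, and the exact parameter names vary by driver and version, so check the entries your TSQLConnection actually contains):

        // close before changing connection parameters
        SQLConnection1.Close;
        // set the host through the Params list rather than a Driver.Hostname property
        SQLConnection1.Params.Values['HostName'] := edtHost.Text;
        // some drivers expect the host embedded in the Database parameter instead,
        // e.g. 'myhost:C:\data\mydb.fdb' for InterBase/Firebird:
        // SQLConnection1.Params.Values['Database'] := edtHost.Text + ':C:\data\mydb.fdb';
        SQLConnection1.Open;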

    Read the article

  • YouTube: Chrome Dev Tools Integration with NetBeans IDE!

    - by Geertjan
    Some time ago my colleague David Konecny discussed the question "What works better for you? NetBeans IDE or Chrome Developer Tools?". It's a good read. David highlights the point that it's not a question of either/or but both, since the two tools are like the apple/pear dichotomy. However, good news! The two worlds are no longer divided in NetBeans IDE 7.4. Changes you make in Chrome Developer Tools (CDT) are automatically persisted to the related files in NetBeans IDE, as you can see in a new YouTube clip I made today. The new integration of CDT with NetBeans IDE has been mentioned in the NetBeans IDE 7.4 New & Noteworthy, while on Twitter this was sighted yesterday. Watch the movie above and within 5 minutes you too will see the simplicity and power of CDT integration with NetBeans IDE. In other news, I consider the above to be my favorite new feature (though it's a tough choice, since there are so many new features in NetBeans IDE 7.4), for the article "What is your favorite new NetBeans IDE 7.4 feature?"

    Read the article

  • Implementing the transport layer for a SIP UAC

    - by Jonathan Henson
    I have a somewhat simple, but specific, question about implementing the transport layer for a SIP UAC. Do I expect the response to a request on the same socket that I sent the request on, or do I let the UDP or TCP listener pick up the response and then route it to the correct transaction from there? The RFC does not seem to say anything on the matter. It seems, especially with UDP, which is connectionless, that I should just let the listeners pick up the response, but that seems sort of counterintuitive. In particular, I have seen plenty of UAC implementations which do not depend on having a listener in the transport layer. Also, most implementations I have looked at do not have the UAS receiving loop responding on the socket at all. This would tend to indicate that the client should not be expecting a reply on the socket that it sent the request on. For clarification, suppose my transport layer consists of the following elements:

        TCPClient (sends requests for a UAC via TCP)
        UDPClient (sends requests for a UAC via UDP)
        TCPServer (loop receiving requests and dispatching to the transaction layer via TCP)
        UDPServer (loop receiving requests and dispatching to the transaction layer via UDP)

    Obviously, the *Client sends my requests. The question is, what receives the response? The *Client waiting on a recv or recvfrom call on the socket it used to send the request, or the *Server? Conversely, the *Server receives my requests. What sends the response? The *Client? Doesn't this break the roles of each member a bit?
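    For what it's worth, a minimal sketch of one common UDP arrangement (an illustration, not something mandated by the RFC): bind a single socket, send requests from it, and let one receive loop dispatch everything that arrives, responses included, to the transaction layer, which matches responses to client transactions (e.g. by the Via branch parameter):

        import socket

        # one UDP socket acts as both the "client" and "server" side of the transport
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 5060))   # the port advertised in the Via header

        def send_request(raw_request: bytes, addr) -> None:
            # requests leave from the same bound port the listener reads
            sock.sendto(raw_request, addr)

        def receive_loop(dispatch) -> None:
            # requests and responses alike land here; dispatch() hands them
            # to the transaction layer for matching
            while True:
                data, addr = sock.recvfrom(65535)
                dispatch(data, addr)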

    Read the article

  • Documenting your database with Visual Studio 2012 SSDT tools

    - by krislankford
    The title of this post names something I suspect you and your colleagues wish you had a better way to do. I understand, as I am asked this question frequently. A couple of weeks ago I was asked the same question by a customer who documents their database using the ApexSQL Doc tool, which uses the extended properties on objects to create automated documentation. I thought that was super interesting and went down the path of seeing how we could support the creation of this documentation while leveraging the Visual Studio 2012 SSDT tools. What I found was rather intriguing. There is a property called "Description" on all objects in the SSDT tools. This property is rather subtle and, I am betting, overlooked. To be honest, this property has probably been there for a while and I just never discovered it. Adding text to this "Description" property allows Visual Studio to create the commands for the extended properties directly in your schema, which should be version controlled. As I did more digging, there seemed to be extended properties at every level of the SQL database objects. This fills some rather challenging gaps and allows organizations to manage SQL schema using the Visual Studio SQL database tools while getting a way to automatically document the database. This will also work in the automation of the create and alter scripts that can be generated as part of an automated build system. Now we essentially get a way to store, build and document the database in a nice little ALM package. Happy coding!
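    For illustration, the "Description" text is persisted as the standard MS_Description extended property, so the script SSDT emits looks something like this (the object names here are hypothetical):

        EXEC sys.sp_addextendedproperty
            @name = N'MS_Description',
            @value = N'Stores one row per customer account.',
            @level0type = N'SCHEMA', @level0name = N'dbo',
            @level1type = N'TABLE',  @level1name = N'Customer';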

    Read the article

  • Storing data for use on Android and Windows Applications

    - by Andy Mepham
    I posted this last night on Stack Overflow and was advised to move it over to Stack Exchange; thank you for taking a moment to look at my question. I'm developing a project proposal for my final-year project at university, and as I aim to use programming languages I am currently not too familiar with, I'm looking for some guidance - I can't include details of my project, but hopefully you will understand what I'm after. I'm going to be creating an Android application (in Java) and a Windows application (in C#) that will ideally access, query and update a remotely hosted database or set of XML files (this would most likely be over the Internet). I've done some looking around the internet, and SQLite seems like a safe bet for cross-platform manipulation of the database; however, I would like to keep the system as lightweight as possible, and I'm wondering whether XML files may provide a better alternative? Anyone out there who has experience using SQLite and/or remotely hosted XML for the purposes of Android and/or C# development that could point me in the right direction? If there is an alternative solution other than those I have mentioned, I would be interested to hear about it too. Thank you for taking the time to read my question. Edit: The purpose of this application is for a small-scale business; the data source would not need to be updated by more than one source but may be viewed from multiple sources (i.e. through multiple phones and a desktop PC). The database wouldn't be updating masses of data at a time (most likely single rows of a few tables at the most).

    Read the article

  • Improved Customer Experience, but at what Cost?

    - by Tony Berk
    We can all probably agree that improving your customers' experience is a good thing. But a key question many people are asking is whether it will help your organization and, in particular, what the financial benefits are. That's a good question, especially when companies ARE experiencing phenomenal return on investment (ROI). Of course, there are many factors that impact ROI or other measures of success, but we'd like to share some success stories as examples of customer experience in action and delivering positive results. If you would like to learn more about the economics of customer experience, see Brian Curran's presentation at the Oracle Customer Experience Summit last month. In this series of blog posts, we'll share actual customer stories. Today's example is Dell, which uses Oracle Real-Time Decisions (RTD) and Siebel CRM as part of their customer experience portfolio to better understand their customers' needs and wants and provide consistent interactions. Regular readers of this blog are probably familiar with Siebel, but RTD may be new to many of you. RTD is a complete decision management solution that delivers real-time decisions and recommendations and automatically renders decisions within a business process to create tailored messaging for every customer interaction. What does that mean? In the video below, Dell describes how customer experience is important not just for one interaction channel, but across all "vehicles." RTD is helping Dell understand customer behavior and communicate with the customer in a more relevant manner across all communication or interaction channels, including sales and service call centers, email marketing and online. Dell continues to expand its use of RTD because the benefits are showing up in sales, service and marketing results, including a 19% increase in close rates, faster issue resolution and a 40% improvement in revenue per click in email marketing. Click here to learn more about Oracle Customer Experience, and stay tuned for more customer spotlights.

    Read the article

  • Ray Tracing concerns: Efficient Data Structure and Photon Mapping

    - by Grieverheart
    I'm trying to build a simple ray tracer for specific target scenes. An example of such a scene can be seen below. I'm concerned as to what acceleration data structure would be most efficient in this case, since all objects are touching but, on the other hand, the scene is uniform. The objects in my ray tracer are stored as a collection of triangles, so I also have access to individual triangles. Also, when trying to find the bounding box of the scene, how should infinite planes be handled? Should one instead use the viewing frustum to calculate the bounding box? A few other questions I have are about photon mapping. I've read the original paper by Jensen and much more material. In the compact data structure for the photon they introduce, they store photon power as 4 chars, which from my understanding is 3 chars for color and 1 for flux. But I don't understand how 1 char is enough to store a flux of the order of 1/n, where n is the number of photons (I'm also a bit confused about flux vs. power). The other question about photon mapping is whether it would be more efficient in my case to store photons per object (or even per object triangle) instead of using a balanced kd-tree. Also, the same question about the bounding box of the scene, but for photon mapping: how should one find a bounding box from the point of view of the light when infinite planes are involved?
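    On the 4-char question, one clarifying note: in Jensen's compact photon the power is stored in Ward's shared-exponent RGBE format (three 8-bit mantissas plus one 8-bit exponent common to all three channels), rather than "3 for color, 1 for flux"; it is the exponent byte that lets tiny values of order 1/n be represented. A sketch of the encoding (assuming the usual RGBE convention):

        import math

        def rgb_to_rgbe(r, g, b):
            """Encode an RGB power triple into 4 bytes (Ward's RGBE format)."""
            v = max(r, g, b)
            if v < 1e-32:                 # effectively zero: store black
                return (0, 0, 0, 0)
            m, e = math.frexp(v)          # v = m * 2**e, with 0.5 <= m < 1
            scale = m * 256.0 / v         # scales the largest channel into [128, 256)
            return (int(r * scale), int(g * scale), int(b * scale), e + 128)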

    Read the article

  • Many small scripts, one repository or multiple?

    - by The Jug
    A co-worker and I have run into an issue that we have multiple opinions on. Currently we have a git repository in which we keep all of our cronjobs. There are about 20 crons, and they are not really related except for the fact that they are all small Python scripts and essential for some activity. We are using a fabfile.py to deploy and a requirements.txt file to manage the requirements for all of the scripts. Our issue is basically: do we keep all of these scripts in one git repository, or should we separate them out into their own repositories? By keeping them in one repository it is easier to deploy them onto one server, and we can use just one cron file for all the scripts. However this feels wrong, as the 20 cronjobs are not logically related. Additionally, when using one requirements.txt file for all the scripts, it's hard to figure out what the dependencies are for a particular script, and they all have to use the same versions of packages. We could separate all of the scripts out into their own repositories, but this creates 20 different repositories that need to be remembered and dealt with. Most of these scripts are not very large, and that solution seems to be overkill. A related question is: do we use one big crontab file for all cronjobs, or a separate file for each? If each has its own, how does one crontab's installation avoid overwriting the other 19? This also seems like a pain, as there would then be 20 different cron files to keep track of. In short, our main question is: do we keep them all closely bundled in one repository, or do we separate them out into their own repositories, each with its own requirements.txt and fabfile.py? We feel like we're probably overlooking some really simple solution. Is there an easier way to deal with this issue?
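    On the crontab-overwriting subquestion, one conventional approach (a sketch, not from the post): rather than a user crontab, ship one small file per script into /etc/cron.d, where each entry carries its own user field and separate files never clobber one another:

        # /etc/cron.d/fetch-report  (one file per script; names are hypothetical)
        # m   h  dom mon dow  user    command
        */15  *  *   *   *    deploy  /usr/bin/python /opt/scripts/fetch_report.py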

    Read the article

  • Grading an algorithm: Readability vs. Compactness

    - by amiregelz
    Consider the following question in a test / interview: Implement the strcpy() function in C:

        void strcpy(char *destination, char *source);

    The strcpy function copies the C string pointed to by source into the array pointed to by destination, including the terminating null character. Assume that the size of the array pointed to by destination is long enough to contain the same C string as source, and does not overlap in memory with source. Say you were the tester; how would you grade the following answers to this question?

    1)

        void strcpy(char *destination, char *source)
        {
            while (*source != '\0')
            {
                *destination = *source;
                source++;
                destination++;
            }
            *destination = *source;
        }

    2)

        void strcpy(char *destination, char *source)
        {
            while (*(destination++) = *(source++))
                ;
        }

    The first implementation is straightforward: it is readable and programmer-friendly. The second implementation is shorter (one line of code) but less programmer-friendly; it's not so easy to understand the way this code is working, and if you're not familiar with the operator precedence in this code then it's a problem. I'm wondering if the second answer would show more complexity and more advanced thinking, in the tester's eyes, even though both algorithms behave the same, and although code readability is considered to be more important than code compactness. It seems to me that since making an algorithm this compact is more difficult, it will show a higher level of thinking as an answer in a test. However, it is also possible that a tester would consider the second answer not good because it's not readable. I would also like to mention that this is not specific to this example, but general to code readability vs. compactness when implementing an algorithm, specifically in tests / interviews.

    Read the article

  • Google analytics - drop in traffic

    - by Andy
    Bit of a general question here. We are in the process of converting a number of our clients from older web sites to new ones. The problem, and sorry for being so general here, is that we are seeing a sharp decline in traffic as reported by Google Analytics. It's not a gradual decline; it seems to hit almost as soon as the new site goes live. I've just got a few questions to see if there is something we are doing wrong:

    a) We are using the same analytics accounts going from old to new site. Is this a bad idea?

    b) The actual analytics code is integrated into the pages using a server-side include. Is this a bad idea?

    c) We structure our sites differently to our old sites. I.e., the old sites would pretty much have all the web pages in the root directory, and hyperlinks would link to the page files, e.g.:

        <a href="somepage.aspx">Link</a>

    Our new sites now have a directory structure that pretty much reflects the navigation structure, and hyperlinks link to the page's directory instead of the actual page, e.g.:

        <a href="/new-items/shoes/">New shoes</a>

    Is this a bad idea? I'm really searching for a needle in a haystack here, and would appreciate any help or advice as to why we are getting such a sharp and sudden drop in traffic. Again, sorry for such a general question. Thanks in advance.

    Read the article

  • Ubuntu unstable, showing awkward behavior

    - by Christophe De Troyer
    Let me start off by saying that this problem can't be described in such a way that allows me to find other topics that have some relevance to it. That's why I created this question. In case this question has been asked before, I apologize. So what is the problem: my computer (Intel Core i5 2500K with HD 3000 graphics, 6 GB DDR, 1 SSD, 3 HDDs and an Asus P8Z68 mobo) runs Windows on the SSD. But I decided to give Ubuntu a chance to be my daily OS for basic needs, since it's open source and I find it a handicap not to be able to work with it. I decided to run the Windows installer and install Ubuntu 12.04 on the 320 GB HDD which was not being used in my computer. After installing it and booting it, it worked great! I spent the rest of the day/night using it, and falling for it. Today I booted Ubuntu (I had the choice in the bootloader, as I expected). It asked me for my login and started logging in. After letting it "boot" for a few (literally) minutes, I tried determining the cause of this. What I've figured out so far:

        When I left-click on my desktop, it freezes completely for a few seconds
        I have something like tearing in the left side menu (in games, you know) when I move my mouse around
        It runs well when just hovering around with my mouse, but from the point I click on something it freezes

    What have I tried? I ran an HD Tune diagnostic on the HDD, but the performance seems to be very close to the stock values, so I'm taking it as a good HDD. I'm trying to get to the driver update panel for Ubuntu, but in the state it's in, that's taking a lot of time... Could anyone point me in a direction for troubleshooting this? I'm not really a noob at all, just when using Linux... :) Thanks in advance! Christophe

    Read the article

  • Making The EBS Upgrade From 11.5.10 Easier - Part III

    - by Annemarie Provisero
    ADVISOR WEBCAST: Making The EBS Upgrade From 11.5.10 Easier - Part III
    PRODUCT FAMILY: E-Business Suite
    July 19, 2011 at 8 am PT, 9 am MT, 11 am ET

    This one-hour session is recommended for technical users who are responsible for upgrading their E-Business Suite applications from Release 11.5.10 to Release 12.1.x. As you begin your upgrade process, there are a number of tools available to assist you in a successful upgrade. A successful upgrade requires careful planning, correct upgrade processing, detailed testing, and user (re)training prior to upgrade. Over three sessions we will discuss the tools that you can use to assist in your upgrade tasks. These tools are available to you via My Oracle Support and as part of the E-Business Suite product offerings. In this third session, we'll cover the Best Practices for Using The Upgrade Tools. Additionally, this session includes an extended question and answer period. In the first part of the three-session series, we covered the following topics:

        Overview of Tools Available for Upgrading
        Upgrade versus Re-implementing
        Upgrade Community
        Upgrade Product Information Center Page
        Detailed Look at Upgrade Advisor

    In the second session, we covered the following topics:

        Recap of Part I
        Detailed Look at Maintenance Wizard
        Detailed Look at Patch Wizard

    A replay of those sessions is available via Note 740964.1, Advisor Webcast Archive. A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • My Visual Studio Demo Video Link disappeared – How do I get it back?

    - by Tarun Arora
    ***Special thanks to Adam Cogan for asking this question and to Andrew Bragdon for answering this question on the ALM Champs list.***

    1. Problem – The link to demo videos disappears once you have watched the video

    Learning Visual Studio has become easier than ever, with the Visual Studio how-to videos hosted inside of Visual Studio showing up in the context of the task you are trying to achieve. For instance, when you click Code Review in Team Explorer you can see the link "Streaming Video: Using Code Review to improve quality"; when you click this link, the video stream is delivered to you right within Visual Studio. The next time you run Visual Studio, you will notice that the home page has a check mark against the video "Using Code Review to improve quality". If you navigate to Code Review in the myWork hub in Team Explorer, you will notice that the link "Streaming Video: Using Code Review to improve quality" does not show up any more.

    2. Solution – How to get the demo video link back

    Warning: Editing the registry can lead to serious problems if not done correctly. Always back up your registry before editing. This solution is neither suggested nor supported by Microsoft.

    Type regedit at the Run prompt to open the Registry Editor. Navigate to the path Computer\HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0\UltimateStartPage\VideoState and notice the newly created folder "TeamExplorer.CodeReview"; notice the key Watched is set to 1. Change the value of the key 'Watched' to 0, restart Visual Studio, navigate to Code Review in the myWork hub and voila, the link to stream the video is back!

    Watch and enjoy the demo videos to your heart's content!
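    The same edit can be scripted (a sketch; it assumes Watched is stored as a DWORD, so confirm the value type in your registry first):

        reg add "HKCU\Software\Microsoft\VisualStudio\11.0\UltimateStartPage\VideoState\TeamExplorer.CodeReview" /v Watched /t REG_DWORD /d 0 /f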

    Read the article

  • Biggest mistake you've ever made

    - by Rogue Coder
    Similar to a question I read on Server Fault: what is the biggest mistake you've ever made in an IT-related position? Some examples from friends: I needed to do some work on a production site, so I decided to copy over the live database to the beta site. Pretty standard, but when I went to the beta site it was still pulling out-of-date info. OOPS! I had copied the beta database over to the live site! Thank god for backups. And for me: I created a form for an event that was to be held during a specific time range. Participants would fill out the form for a chance to win, and we would send the event organizers a CSV from the database. I went into the database and found ONLY 1 ENTRY, MINE. Upon investigating, it appears as though I forgot an auto-increment key, and because of the server setup there was no way to recover the lost data. I am aware this question is similar to ones on Stack Overflow, but the ones I found seemed to receive generic answers instead of actual stories :) What is the biggest coding error/mistake ever…

    Read the article

  • 2D Rectangle Collision Response with Multiple Rectangles

    - by Justin Skiles
    Similar to: Collision rectangle response. I have a level made up of tiles where the edges of the level are made up of collidable rectangles. The player's collision box is represented by a rectangle as well. The player can move in 8 directions. The player's velocity is equal in the X and Y directions, and constant. Each update, I check the player's collision against all tiles that are within a certain distance. When the player collides with a rectangle, I find the intersection depth and resolve along the shallowest axis first, followed by the other axis. This resolution happens for both axes simultaneously. See below for two examples of situations where I am having trouble.

    Moving up-left against the left wall

    In the scenario below, the player is colliding with two tiles. The tile intersection depth is equal on both axes for the top tile and shallower in the X axis for the middle tile. Because the player is moving up the wall, the player should slide in an upward direction along the wall. This works properly as long as the rectangle with the shallower depth is evaluated first. If the rectangle with equal intersection depths is evaluated first, there is a chance the player becomes stuck.

    Moving up-left against the top wall

    Here is an identical scenario, with the exception that the collision is with the top wall. The same problem occurs at the corners when the intersection depth is equal for both axes.

    I guess my overall question is: how can I ensure that collision response occurs on tiles that have non-equal intersection depth before tiles that have equal intersection depth, in order to get around the weirdness that occurs at these corners? Sean's answer in the linked question was good, but his solution required having different velocity components in a certain direction. My situation has equal velocities, so there's no good way to tell which direction to resolve at corners. I hope I have made my explanation clear.
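    One common approach (a sketch with hypothetical types and helpers, not from the linked answer): each frame, gather every overlapping tile, resolve only the tile whose shallowest penetration axis is smallest, then recompute the remaining overlaps, so the ambiguous equal-depth corner tiles are handled only after the unambiguous ones:

        // Assumes XNA-style Rectangle/Vector2; GetIntersectionDepth is a helper
        // returning the signed overlap on each axis (Vector2.Zero for no overlap).
        while (true)
        {
            Rectangle? best = null;
            Vector2 bestDepth = Vector2.Zero;
            foreach (Rectangle tile in nearbyTiles)
            {
                Vector2 d = GetIntersectionDepth(playerBounds, tile);
                if (d == Vector2.Zero) continue;   // not overlapping
                // prefer the tile with the smallest shallow-axis penetration
                if (best == null ||
                    Math.Min(Math.Abs(d.X), Math.Abs(d.Y)) <
                    Math.Min(Math.Abs(bestDepth.X), Math.Abs(bestDepth.Y)))
                {
                    best = tile;
                    bestDepth = d;
                }
            }
            if (best == null) break;               // everything resolved
            // push out along the shallower axis only, then re-test all tiles
            if (Math.Abs(bestDepth.X) < Math.Abs(bestDepth.Y))
                playerPosition.X += bestDepth.X;
            else
                playerPosition.Y += bestDepth.Y;
            playerBounds = BoundsFor(playerPosition);  // hypothetical helper
        }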

    Read the article
