Search Results

Search found 10563 results on 423 pages for 'complex numbers'.

Page 228/423 | < Previous Page | 224 225 226 227 228 229 230 231 232 233 234 235  | Next Page >

  • How to visualize the design of a program in order to communicate it to others

    - by Joris Meys
    I am (re-)designing some packages for R, and I am currently working out the necessary functions and objects, both internal and for the interface with the user. I have documented the individual functions and objects, so I have descriptions of all the little parts. Now I need to give an overview of how the parts fit together - the scheme of the motor, so to say. I started by making some flowchart-like graphs in Visio, but that quickly became a clumsy and useless collection of boxes, arrows and what-not. Hence the question: Is there specific software you can use for visualizing the design of your program? If so, care to share some tips on how to do this most efficiently? If not, how do other designers create the scheme of their programs and communicate it to others? Edit: I am NOT asking how to explain complex processes to somebody, nor asking how to illustrate programming logic. I am asking how to communicate the design of a program/package, i.e.: the objects (with key features and representation if possible); the related functions (with arguments and function if possible); the interrelation between the functions at the interface and the internal functions (I'm talking about an extension package for a scripting language, keep that in mind). So something like this, but better. This is (part of) the interrelations between functions in the old package that I'm now redesigning, for obvious reasons :-) PS: I made that graph myself, using code extraction tools on the source and feeding the interrelation matrix to yEd Graph Editor.

    Read the article

  • Calculating 3d rotation around random axis

    - by mitim
    This is actually a solved problem, but I want to understand why my original method didn't work (hoping someone with more knowledge can explain). (Keep in mind, I'm not very experienced in 3D programming, having only played with the very basics for a little bit... nor do I have a lot of mathematical experience in this area.) I wanted to animate a point rotating around another point on a random axis, say 45 degrees along the y axis (think of an electron around a nucleus). I know how to rotate using the transform matrix along the X, Y and Z axes, but not around an arbitrary (45 degree) axis. Eventually after some research I found a suggestion: rotate the point by -45 degrees around Z so that it is aligned, then rotate by some increment along the Y axis, then rotate it back by +45 degrees, for every frame tick. While this certainly worked, I felt that it seemed to be more work than needed (too many method calls, math, etc.) and would probably be pretty slow at runtime with many points to deal with. I thought maybe it was possible to combine all the rotation matrices involved into one rotation matrix and use that as a single operation. Something like: [cos(-45) -sin(-45) 0; sin(-45) cos(-45) 0; 0 0 1] (rotate by -45 along Z), multiplied by [cos(2) 0 -sin(2); 0 1 0; sin(2) 0 cos(2)] (rotate by 2 degrees, my increment, along Y), then multiply that result by [cos(45) -sin(45) 0; sin(45) cos(45) 0; 0 0 1] (rotate by 45 along Z). I get one mess of a matrix of numbers (since I was working with unknowns and 2 angles), but I felt like it should work. It did not, and I found a solution on the wiki using a different matrix, but that is something else. I'm not sure if maybe I made an error in the multiplication, but my question is: is this actually a viable way to solve the problem, to take all the separate transformations, combine them via multiplication, then use the result, or not?
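    For reference, here is the composite that suggestion describes, written as a single matrix. This is a sketch assuming column vectors (so the factors apply right to left) and standard right-handed rotation conventions; it is not taken from the original post:

        $R(\theta) = R_z(+45^\circ)\, R_y(\theta)\, R_z(-45^\circ)$, where
        $R_z(\alpha) = \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix}$ and
        $R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix}$

    The product is itself a single rotation matrix about the tilted axis, so precomputing it once and reusing it every frame is a viable approach. The usual pitfalls are writing the factors in application order instead of multiplication order (the two are reversed when vectors are columns) and the sign convention used in R_y.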

    Read the article

  • How to Transfer All Your Information to a New PS3

    - by Justin Garrison
    The PlayStation 3 now costs half the price, has double the storage, and uses half the power. If you need another reason to upgrade, Sony also makes it easy to transfer all of your information to a new console. Transferring all of your games, data, and settings is easier than ever, and all you need is an ethernet cable. Read on as we walk you through the whole process of setting up your new PS3 and wiping all your information off the old one.

    Read the article

  • Access functions from user control without events?

    - by BornToCode
    I have an application made with user controls and a function on the main form that removes the previous user controls and shows the desired user control, centered and tweaked: public void DisplayControl(UserControl uControl). I find it much easier to make this function static, or to access it by reference from the user control, like this: MainForm mainform_functions = (MainForm)Parent; mainform_functions.DisplayControl(uc_a); You probably think it's a sin to access a function in the main form from the user control; however, raising an event seems much more complex in such a case. I'll give a simple example: let's say I raise an event from usercontrol_A to show usercontrol_B on the main form, so I write this: uc_a.show_uc_b += (s,e) => { usercontrol_B uc_b = new usercontrol_B(); DisplayControl(uc_b); }; Now what if I want usercontrol_B to also have an event to show usercontrol_C? Now it would look like this: uc_a.show_uc_b += (s,e) => { usercontrol_B uc_b = new usercontrol_B(); DisplayControl(uc_b); uc_b.show_uc_c += (s2,e2) => { usercontrol_C uc_c = new usercontrol_C(); DisplayControl(uc_c); } }; THIS LOOKS AWFUL! The code is much simpler and more readable when you access the function from the user control itself, so I came to the conclusion that in such a case it's not so terrible if I break the rules and don't use events for such a general function. I also think that a readable user control that needs small adjustments for another app is preferable to a 100% 'generic' one which makes my code look like a pile of mud. What is your opinion? Am I mistaken?
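    One middle ground, sketched below, is to give every user control a small navigation interface that MainForm implements; the controls then never cast Parent and never chain events. The interface name, the button handler and the constructor wiring are made up for illustration (and it assumes usercontrol_B takes the same constructor parameter) - this is not the asker's code, just one hedged way to structure it:

        using System;
        using System.Windows.Forms;

        // Hypothetical interface: the only thing user controls know about their host.
        public interface INavigationHost
        {
            // Removes the current control and shows the given one, centered and tweaked.
            void DisplayControl(UserControl uControl);
        }

        // MainForm already has DisplayControl, so implementing the interface costs nothing.
        public partial class MainForm : Form, INavigationHost
        {
            public void DisplayControl(UserControl uControl)
            {
                // existing remove/center/tweak logic goes here
            }
        }

        // Each user control receives the host once instead of casting Parent.
        public partial class usercontrol_A : UserControl
        {
            private readonly INavigationHost _host;

            public usercontrol_A(INavigationHost host)
            {
                InitializeComponent();
                _host = host;
            }

            private void buttonShowB_Click(object sender, EventArgs e)
            {
                // No nested event subscriptions: ask the host directly.
                _host.DisplayControl(new usercontrol_B(_host));
            }
        }

    This keeps the call-site readability the question argues for, while the controls depend on one small interface rather than on the concrete MainForm.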

    Read the article

  • Can Clojure's thread-based agents handle c10k performance?

    - by elliot42
    I'm writing a c10k-style service and am trying to evaluate Clojure's performance. Can Clojure agents handle this scale of concurrency with its thread-based agents? Other high performance systems seem to be moving towards async-IO/events/greenlets, albeit at a seemingly higher complexity cost. Suppose there are 10,000 clients connected, sending messages that should be appended to 1,000 local files--the Clojure service is trying to write to as many files in parallel as it can, while not letting any two separate requests mangle the same single file by writing at the same time. Clojure agents are extremely elegant conceptually--they would allow separate files to be written independently and asynchronously, while serializing (in the database sense) multiple requests to write to the same file. My understanding is that agents work by starting a thread for each operation (assume we are IO-bound and using send-off)--so in this case is it correct that it would start 1,000+ threads? Can current-day systems handle this number of threads efficiently? Most of them should be IO-bound and sleeping most of the time, but I presume there would still be a context-switching penalty that is theoretically higher than async-IO/event-based systems (e.g. Erlang, Go, node.js). If the Clojure solution can handle the performance, it seems like the most elegant thing to code. However if it can't handle the performance then something like Erlang or Go's lightweight processes might be preferable, since they are designed to have tens of thousands of them spawned at once, and are only moderately more complex to implement. Has anyone approached this problem in Clojure or compared to these other platforms? (Thanks for your thoughts!)

    Read the article

  • Should a project start with the client or the server?

    - by MadBurn
    Pretty simple question with a complex answer. Should a project start with the client or the server, and why? Where should a single programmer start a client/server project? What are the best practices and what are the reasons behind them? If you can't think of any, what reasons do you use to justify why you would choose to start one before the other? Personally, I'm asking this question because I'm finishing up specs for a project I will be doing for myself on the side for fun. But now that I'm finishing this phase, I'm wondering "ok, now where do I begin?" Since I've never done a project like this by myself, I'm not sure where I should start. In this project, my server will be doing all the heavy lifting and the client will just be sending updates, getting information from the server, and displaying it. But I don't want that to sway the answer, as I'm looking for a more in-depth and less project-specific answer that would apply to any project I begin in the future.

    Read the article

  • Generalist Languages: Dying or Alive and Well?

    - by dsimcha
    Around here, it seems like there's somewhat of a consensus that generalist programming languages (languages that try to be good at everything: support multiple paradigms, support both very high- and very low-level programming, etc.) are a bad idea, and that it's better to pick the right tool for the job and use lots of different languages. I see three major areas where this is flawed: Interfacing multiple languages is always at least a source of friction and is sometimes practically impossible. How severe a problem this is depends on how fine-grained the interfacing is. Near the boundary between the two languages, though, you're basically limited to the intersection of their features, and you have to care about things like binary interfaces that you usually wouldn't. Passing complex data structures (i.e. not just primitives and arrays of primitives) between languages is almost always a hassle. Furthermore, shifting between different syntaxes, different conventions, etc. can be confusing and annoying, though this is a fairly minor complaint. Requirements are never set in stone. I hate picking a language thinking it's the right tool for the job, then realizing that, when some new requirement surfaces, it's actually a terrible choice for that requirement. This has happened to me several times before, usually when working with languages that are very slow, very domain specific and/or have very poor concurrency/parallelism support. When you program in a language for a while, you start to build up a personal toolbox of small utility functions/classes/programs. The value of these goes drastically down if you're forced to use a different language than the one you've accumulated all this code in. What am I missing here? Why shouldn't more focus be placed on generalist languages? Are generalist languages as a category dying or alive and well?

    Read the article

  • PPF Savings Interest Rate Increased To 8.6% From 8% [India]

    - by Gopinath
    Here is some good news for small Indian investors who save money in Public Provident Fund (PPF) accounts operated in post offices and a few nationalized banks – the returns on your PPF savings are going to increase. The Indian government has decided to increase the rate of interest paid to customers from 8% to 8.6%. To put it in numbers, if you have 2,00,000 of savings in a PPF account, it is going to give returns of 17,200/- per annum compared to 16,000/- at 8%. Also, the maximum cap on investments into a PPF account per annum is increased to 1,00,000 from 70,000/-. PPF is one of the safest debt investments that gives very decent returns, but if you are a salaried employee with a PF account then consider investing in Voluntary PF (VPF) instead of PPF, as VPF returns are higher than PPF.
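    The quoted figures follow directly from the rates:

        $2{,}00{,}000 \times 8.6\% = 17{,}200$ per annum, versus $2{,}00{,}000 \times 8\% = 16{,}000$ at the old rate.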

    Read the article

  • Must developers understand the business domain or should the specification be sufficient?

    - by Jerome C.
    I work for a company whose domain is really difficult to understand because it is high-technology electronics, but this is applicable to any software development in a complex domain. The application that I work on displays a lot of information, charts, and metrics which are difficult to understand without experience in the domain. The developer uses a specification that describes what the software must do, such as specifying that a particular chart must display this kind of metric and that the metric is computed with the following arithmetic formula. This way, the developer doesn't really understand the business or what/why he is doing in this task. This can be OK if the specification is really detailed, but when it isn't, or when the author has forgotten a use case, it is quite hard for the developer to find a solution. On the other hand, training every developer in all the business aspects can be very long and difficult. Should we give more importance to detailed specifications (though, as we know, the perfect specification does not exist), or should we train all the developers to understand the business domain? EDIT: keep in mind in your answer that the company could use external developers.

    Read the article

  • Weekly Cloud Roundup 2012-15

    - by Alan Smith
    Filtering the informative, insightful and quirky from the fire hose of cloud-based hype. Irving Wladawsky-Berger provides some great insight into The Complex Transition to the Cloud, sharing his views on the slow adoption of cloud computing in organizations. “…a prediction by the research firm Gartner that while cloud computing will continue to grow at almost 20 percent a year, it will account for less than 5 percent of total IT spending in 2015.” With a more positive mindset, Balaji Viswanathan highlights 7 Salient Trends and Directions in Cloud Computing that could be shaping the industry over the next few years. Cloud computing also looks to save energy: “A small business with 100 users that moved the Microsoft applications to the cloud could cut energy use and carbon emissions by 90%. Large organizations with 10,000 users saw a 30% reduction.” More on that story here. The expansion of Windows Azure has been in the news with the announcement of “East US” and “West US” datacenters; this was covered by Visual Studio Magazine and Mary-Jo, and according to thenextweb.com Microsoft is also building a $112 million data center in Wyoming. The cloud price war is still in full swing, with Joe Panettieri discussing the pricing of Windows Azure and Office 365 and asking How Low Can It Go?

    Read the article

  • What is Stackify?

    - by Matt Watson
    You have developers, applications, and servers. Stackify makes sure that they are all working efficiently. Our mission is to give developers the integrated tools they need to better troubleshoot and monitor the applications they create and the servers that they run on. Traditional IT operations tools are designed for network and system administrators. Developers commonly spend 30% of their time working with IT operations remediating application service problems. Developers currently lack tools to efficiently support the applications they create. Stackify delivers the application support functionality that developers need: view application deployment locations, versions, and history; browse files on servers to ensure proper deployments; access configuration and log files on servers; remotely restart Windows services, scheduled tasks, and web applications; basic server monitoring and alerts; collect all application exceptions to a centralized point; log and report on custom application events. Stackify is building an integrated DevOps solution delivered from the cloud, designed to meet the needs of developers but also to help unify the working relationship with IT operations teams and existing security roles. Our goal is to help unify the interaction between developers and IT operations. Stackify allows both teams to have visibility that they never had before, to solve complex application service issues more easily and faster. Stackify’s CEO and CTO both have experience managing very large and high-growth software development teams. That experience is driving our design of Stackify to deliver the integrated tools we always wished we had, the next generation of development operations tools.

    Read the article

  • Strange and erratic transformations when using OpenGL VBOs to render scene

    - by janoside
    I have an existing iOS game with fairly simple scenes (all textured quads) and I'm using Apple's "Texture2D" class. I'm trying to convert this class to use VBOs since the vertices of my objects basically never change so I may as well not re-create them for every object every frame. I have the scene rendering using VBOs but the sizes and orientations of all rendered objects are strange and erratic - though locations seem generally correct. I've been toying with this code for a few days now, and I've found something odd: if I re-create all of my VBOs each frame, everything looks correct, even though I'm almost certain my vertices are not changing. Other notes I'm basing my work on this tutorial, and therefore am also using "IBOs" I create my buffers before rendering begins My buffers include vertex and texture data I'm using OpenGL ES 1.1 Fearing some strange effect of the current matrix GL state at the time of buffer creation I've also tried wrapping my buffer-setup code in a "pushMatrix-loadIdentity-popMatrix" block which (as expected) had no effect I'm aware that various articles have been published demonstrating that VBOs may not help performance, but I want to understand this problem and at least have the option to use them. I realize this is a shot in the dark, but has anyone else experienced this type of strange behavior? What might I be doing to result in this behavior? It's rather difficult for me to isolate the problem since I'm working in an existing, moderately complex project, so suggestions about how to approach the problem are also quite welcome.

    Read the article

  • Effortlessly resize images in Orchard 1.7

    - by Bertrand Le Roy
    I’ve written several times about image resizing in .NET, but never in the context of Orchard. With the imminent release of Orchard 1.7, it’s time to correct this. The new version comes with an extensible media pipeline that enables you to define complex image processing workflows that can automatically resize, change formats or apply watermarks. This is not the subject of this post however. What I want to show here is one of the underlying APIs that enable that feature, and that comes in the form of a new shape. Once you have enabled the media processing feature, a new ResizeMediaUrl shape becomes available from your views. All you have to do is feed it a virtual path and size (and, if you need to override defaults, a few other optional parameters), and it will do all the work for you of creating a unique URL for the resized image, and write that image to disk the first time the shape is rendered: <img src="@Display.ResizeMediaUrl(Path: img, Width: 59)"/> Notice how I only specified a maximum width. The height could of course be specified, but in this case will be automatically determined so that the aspect ratio is preserved. The second time the shape is rendered, the shape will notice that the resized file already exists on disk, and it will serve that directly, so caching is handled automatically and the image can be served almost as fast as the original static one, because it is also a static image. Only the URL generation and checking for the file existence takes time. Here is what the generated thumbnails look like on disk: In the case of those product images, the product page will download 12kB worth of images instead of 1.87MB. The full size images will only be downloaded as needed, if the user clicks on one of the thumbnails to get the full-scale. This is an extremely useful tool to use in your themes to easily render images of the exact right size and thus limit your bandwidth consumption. Mobile users will thank you for that.

    Read the article

  • Finding most Important Node(s) in a Directed Graph

    - by Srikar Appal
    I have a large (~20 million nodes) directed graph with in-edges & out-edges. I want to figure out which parts of the graph deserve the most attention. Often most of the graph is boring, or at least it is already well understood. The way I am defining "attention" is by the concept of "connectedness", i.e. how can I find the most connected node(s) in the graph? In what follows, one can assume that nodes by themselves have no score, the edges have no weight, & they are either connected or not. This website suggests some pretty complicated procedures like n-dimensional space, eigenvectors, graph centrality concepts, PageRank, etc. Is this problem that complex? Can I not do a simple breadth-first traversal of the entire graph where at each node I figure out a way to find the number of in-edges? The node with the most in-edges is the most important node in the graph. Am I missing something here?
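    If "most connected" really does just mean highest in-degree, no traversal order matters at all: a single pass over the edge list is enough. A rough sketch follows (the edge representation is assumed, not taken from the post). PageRank and the other centrality measures only become necessary if a node's importance should also depend on how important its neighbours are, not just how many there are.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class GraphStats
        {
            // edges: (from, to) pairs of node ids. Returns the id of the node
            // with the most in-edges. O(E) time, O(V) extra memory.
            public static long MostReferencedNode(IEnumerable<(long From, long To)> edges)
            {
                var inDegree = new Dictionary<long, long>();
                foreach (var (_, to) in edges)
                {
                    inDegree.TryGetValue(to, out var count);
                    inDegree[to] = count + 1;
                }
                return inDegree.OrderByDescending(kv => kv.Value).First().Key;
            }
        }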

    Read the article

  • Secure Coding Practices in .NET

    - by SoftwareSecurity
    Thanks to everyone who helped pack the room at the Fox Valley Day of .NET. This presentation was designed to help developers understand why secure coding is important, what areas to focus on, and additional resources. You can find the slides here. Remember to understand what you are really trying to protect within your application. This needs to be a conversation between the application owner, developer and architect. Understand what data (or asset) needs to be protected. This could be passwords, credit cards, or Social Security numbers. It may also be business-specific information like business confidential data, etc. Performing a Risk and Privacy Assessment & Threat Model on your applications, even in a small way, can help you organize this process. These are the areas to pay attention to when coding: Authentication & Authorization; Logging & Auditing; Event Handling; Session and State Management; Encryption. Links requested - Slides. Books: The Security Development Lifecycle: SDL: A Process for Developing Demonstrably More Secure Software; Threat Modeling; Writing Secure Code; The Web Application Hacker's Handbook; Secure Programming with Static Analysis. Other resources: OWASP, OWASP Top 10, OWASP WebScarab, OWASP WebGoat, Internet Storm Center, Web Application Security Consortium. Events: OWASP AppSec 2011 in Minneapolis.
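    As one concrete illustration of the data-access side of this (my own example, not taken from the slides): parameterized commands keep untrusted input out of the SQL text, which is the standard defence against SQL injection in .NET data access. The table and column names below are made up.

        using System.Data.SqlClient;

        static class UserLookup
        {
            // Untrusted input is bound as a parameter, never concatenated into the SQL string.
            public static bool UserExists(string connectionString, string userName)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT COUNT(*) FROM Users WHERE UserName = @userName", connection))
                {
                    command.Parameters.AddWithValue("@userName", userName);
                    connection.Open();
                    return (int)command.ExecuteScalar() > 0;
                }
            }
        }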

    Read the article

  • Desktop application, dependency injection

    - by liori
    I am thinking of applying a real dependency injection library to my toy C#/GTK# desktop application. I chose NInject, but I think this is irrelevant to my question. There is a database object, a main window and several utility window classes. It's clear that I can inject the database into every window object, so here DI is useful. But does it make sense to inject utility window classes into other window classes? Example: I have classes such as: class MainWindow {…} class AddItemWindow {…} class AddAttachmentWindow {…} class BrowseItemsWindow {…} class QueryBuilderWindow {…} class QueryBrowserWindow {…} class PreferencesWindow {…} … Each of the utility classes can be opened from MainWindow. Some utility windows can also be opened from other utility windows. Generally, there might be a really complex graph of who can open whom. So each of those classes might need quite a lot of other window classes injected. I'm worried that such usage will go against the suggestion not to inject too many classes at once and become a code smell. Should I use some kind of a service locator object here?
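    One way to keep the window graph out of the constructors (a sketch; the factory interface and the Show method are hypothetical, and nothing here is Ninject-specific) is to inject a single window factory, so every window depends on one abstraction instead of on each sibling it might open. This is admittedly close to a service locator scoped to windows only, which is exactly the trade-off the question is asking about.

        using System;

        // The only dependency any window needs in order to open other windows.
        public interface IWindowFactory
        {
            T Create<T>() where T : class;
        }

        // Container-backed implementation; with Ninject this would delegate to the kernel.
        public class WindowFactory : IWindowFactory
        {
            private readonly Func<Type, object> _resolve;
            public WindowFactory(Func<Type, object> resolve) { _resolve = resolve; }
            public T Create<T>() where T : class => (T)_resolve(typeof(T));
        }

        // MainWindow no longer knows the constructors of the utility windows.
        public class MainWindow
        {
            private readonly IWindowFactory _windows;
            public MainWindow(IWindowFactory windows) { _windows = windows; }

            public void OnAddItemActivated()
            {
                var win = _windows.Create<AddItemWindow>();
                win.Show();   // hypothetical method on the GTK# window class
            }
        }

        public class AddItemWindow
        {
            public void Show() { /* GTK# window setup elided */ }
        }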

    Read the article

  • Complexity of defense AI

    - by Fredrik Johansson
    I have a non-released game, and currently it's only possible to play it with another human being. As the game rules are made up by me, I think it would be great if new players could learn basic gameplay by playing against an AI opponent. I mean, it's not like tennis, where the majority knows at least the fundamental rules. On the other hand, I'm a bit concerned that this AI implementation can be quite complex. I hope you can help me with a complexity estimation. I've tried to summarize the gameplay below. Is this defense AI very hard to do? Basic defense gameplay: Player Defender can move within his land, i.e. inside a random, non-convex polygon. This land will also contain obstacles, modeled as polygons, that Defender has to move around. Player Attacker also has a land, modeled as another such polygon. Assume that Defender shall defend against Attacker. Attacker will then throw a thingy towards Defender's land. To be rewarded, Attacker wants to hit Defender's land, and Defender will want to strike away the thingy from his land before it stops, to prevent Attacker from scoring. To feint Defender, Attacker might run around within his land before the throw, and based on these attacker movements Defender shall then continuously move to the best defense position within his land.
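    For a feel of the scale involved, one of the smaller geometric building blocks the defender AI would need - testing whether a point lies inside one of those random, non-convex polygons - is only a handful of lines. This is a generic ray-casting sketch, not tied to the game's own data structures:

        // Ray-casting point-in-polygon test; works for arbitrary simple (non-convex) polygons.
        public struct Vec2
        {
            public double X, Y;
            public Vec2(double x, double y) { X = x; Y = y; }
        }

        public static class Geometry
        {
            public static bool Contains(Vec2[] polygon, Vec2 p)
            {
                bool inside = false;
                for (int i = 0, j = polygon.Length - 1; i < polygon.Length; j = i++)
                {
                    var a = polygon[i];
                    var b = polygon[j];
                    // Does the edge a-b cross the horizontal ray extending to the right of p?
                    bool crosses = (a.Y > p.Y) != (b.Y > p.Y) &&
                                   p.X < (b.X - a.X) * (p.Y - a.Y) / (b.Y - a.Y) + a.X;
                    if (crosses) inside = !inside;
                }
                return inside;
            }
        }

    Most of the estimation effort is likely to sit above this level: moving around the obstacle polygons and predicting where the thrown object will land.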

    Read the article

  • Geekswithblogs.net Influencer Program

    - by Staff of Geeks
    Recently, @StaffOfGeeks announced the Geekswithblogs.net Influencer Program to a select group of bloggers. Here is a little detail about the program. Description (from the Influencer page): Geekswithblogs.net is a community of bloggers passionate about contributing information to the world of developers and IT professionals. Our bloggers are some of the best in the world and regularly receive honors from outside companies (such as the Microsoft© MVP Program). The Geekswithblogs.net Influencer Program is our way as Staff of Geeks to show our appreciation for those bloggers who have the greatest influence on the site. Each influencer in the program is recognized based on the number of posts, views, and comments they received in the previous quarter. Each quarter, we select the top 25 bloggers of influence and give them special access to products and services we have obtained through our network of partners. Here is how it works. Each quarter we select the top 25 bloggers based on the number of posts they created in that quarter, and apply points for the views and comments those posts receive. The selection is purely based on the numbers; we do not select anyone on any other basis. In fact, in the first round several of our key bloggers did not qualify for the top 25. Though they are still loved dearly, we wanted a program that anyone could be a part of if they put in the hard work. That said, the first quarter ends at the end of March and we will have another round of influencers joining us. Keep the posts rolling and maybe you will be selected as an influencer! Visit the influencers page to see who has the greatest influence on Geekswithblogs right now.

    Read the article

  • Prepared statement alternatives for this middle-man program?

    - by user2813274
    I have a program that uses a prepared statement to connect and write to a database, and it works nicely; I now need to create a middle-man program to insert between this program and the database. This middle-man program will actually write to multiple databases and handle any errors and connection issues. I would like advice on how to replicate the prepared statements with minimal impact on the existing program, however I am not sure where to start. I have thought about creating a "SQL statement class" that mimics the prepared statement, only that seems silly. The existing program is in Java, although it's going to be networked anyway, so I would be open to writing it in just about anything that would make sense. The databases are currently MySQL, although I would like to be open to changing the database type in the future. My main question is what the interface for this program should look like, and does doing this even make sense? A distributed DB would be the ideal solution, but it seems overly complex and expensive for my needs. I am hoping to replicate the main functionality of a distributed DB via this middle-man. I am not too familiar with SQL-based servers distributing data (or databases in general...) - perhaps I am fighting an uphill battle by trying to solve it via programming, but I would like to make an attempt at least.
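    One shape the "SQL statement class" could take - sketched here in C#/ADO.NET purely for illustration, though the same idea maps directly onto Java/JDBC - is a small immutable descriptor (SQL text plus the bound parameter values) that the middle man replays against every target database. The class and method names are made up, and per-target error handling, retries and queueing are exactly the parts left out here.

        using System.Collections.Generic;
        using System.Data.Common;

        // Mimics a prepared statement: the SQL template plus the values bound to it.
        public sealed class StatementDescriptor
        {
            public string Sql { get; }
            public IReadOnlyList<object> Parameters { get; }

            public StatementDescriptor(string sql, IReadOnlyList<object> parameters)
            {
                Sql = sql;
                Parameters = parameters;
            }
        }

        public static class Replicator
        {
            // Replays one logical write against each target connection in turn.
            public static void Execute(StatementDescriptor statement,
                                       IEnumerable<DbConnection> targets)
            {
                foreach (var connection in targets)
                {
                    using (var command = connection.CreateCommand())
                    {
                        command.CommandText = statement.Sql;
                        for (int i = 0; i < statement.Parameters.Count; i++)
                        {
                            var parameter = command.CreateParameter();
                            parameter.ParameterName = "@p" + i;   // assumes "@p0", "@p1", ... placeholders in the SQL text
                            parameter.Value = statement.Parameters[i];
                            command.Parameters.Add(parameter);
                        }
                        command.ExecuteNonQuery();
                    }
                }
            }
        }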

    Read the article

  • Finding header files

    - by rwallace
    A C or C++ compiler looks for header files using a strict set of rules: relative to the directory of the including file (if "" was used), then along the specified and default include paths, fail if still not found. An ancillary tool such as a code analyzer (which I'm currently working on) has different requirements: it may for a number of reasons not have the benefit of the setup performed by a complex build process, and have to make the best of what it is given. In other words, it may find a header file not present in the include paths it knows, and have to take its best shot at finding the file itself. I'm currently thinking of using the following algorithm: Start in the directory of the including file. Is the header file found in the current directory or any subdirectory thereof? If so, done. If we are at the root directory, the file doesn't seem to be present on this machine, so skip it. Otherwise move to the parent of the current directory and go to step 2. Is this the best algorithm to use? In particular, does anyone know of any case where a different algorithm would work better?
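    A rough sketch of the algorithm as stated, in C# purely for illustration (it ignores access errors and include names that themselves contain path components):

        using System.IO;

        public static class HeaderLocator
        {
            // Walk upward from the directory of the including file; at each level,
            // search that directory and all of its subdirectories for the header.
            public static string Find(string includingFilePath, string headerName)
            {
                var dir = new DirectoryInfo(
                    Path.GetDirectoryName(Path.GetFullPath(includingFilePath)));
                while (dir != null)
                {
                    foreach (var file in dir.EnumerateFiles(headerName, SearchOption.AllDirectories))
                        return file.FullName;      // found: take the first match
                    dir = dir.Parent;              // becomes null once the root has been searched
                }
                return null;                       // header doesn't seem to be on this machine
            }
        }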

    Read the article

  • Experiencing the New Social Enterprise

    - by kellsey.ruppel
    Social media and networking tools, popularly known as Web 2.0 technologies, are rapidly transforming user expectations of enterprise systems. Many organizations are investing in these new tools to cultivate a modern user experience in an “Enterprise 2.0” environment that unlocks the full potential of traditional IT systems and fosters collaboration in key business processes. Here are some key points and takeaways from some of the keynotes yesterday at the Enterprise 2.0 Conference: social networks continue to forge complex connections between people, processes, and content, facilitating collaboration and the sharing of information; the customer of today lives inside of Facebook, on your web site, or has an app for that – and they have a question – and want an answer NOW; empowered employees are able to connect to colleagues, build relationships, develop expertise, self-select projects of interest to them, and expand skill sets well beyond their formal roles; a fundamental promise of Enterprise 2.0 is that ideas will be generated and shared by everyone across the organization, leading to increased innovation, agility, and competitive advantage. How well is your organization delivering on these concepts? Are you able to successfully bring together people, processes and content? Are you providing the social tools your employees want and need? Are you experiencing the new social enterprise?

    Read the article

  • Is there an easy way to configure an Ubuntu system to function as a proxy/file server from behind an NAT?

    - by amol.kamath
    Sorry for the long question, but the situation/desire is quite complex. Here is my setup: I have a laptop which I carry around everywhere, and I have a desktop sitting at home, connected to the internet through a router using NAT. My objective is to create a connection from my laptop to my desktop that allows me to (in order of priority): (1) use the desktop as a proxy server, (2) access files on the desktop remotely, and (3) control said desktop from the laptop using VNC or similar. Now here is the scene: I have already looked up and tried several ways to achieve the above goals. TeamViewer - I used it and didn't like it; this is not an option. SSH - this seems ideal, and I have figured out a way to use it for both proxying and file sharing. However, I am currently unable to connect due to the NAT. I have a separate thread trying to get that to work here. VPN - I've figured out how to use this method for the proxy, but not for file sharing. However, this faces the same problem as the above: I can't get it to connect through the NAT. Does anyone have any other solutions for what I want? Otherwise, if there are solutions to connecting through the NAT, please tell me (in the other thread). Thanks

    Read the article

  • What are some good interview questions for a position that consists of reviewing code for security vulnerabilities?

    - by John Smith
    The position is an entry-level position that consists of reading C++ code and identifying lines of code that are vulnerable to buffer overflows, out-of-bounds reads, uncontrolled format strings, and a bunch of other CWE's. We don't expect the average candidate to be knowledgeable in the area of software security nor do we expect him or her to be an expert computer programmer; we just expect them to be able to read the code and correctly identify vulnerabilities. I guess I could ask them the typical interview questions: reverse a string, print a list of prime numbers, etc, but I'm not sure that their ability to write code under pressure (or lack thereof) tells me anything about their ability to read code. Should I instead focus on testing their knowledge of C++? Ask them if they understand what a pointer is and how bitwise operators work? My only concern about asking that kind of question is that I might unfairly weed out people who don't happen to have the knowledge but have the ability to acquire it. After all, it's not like they will be writing a single line of code, and it's not like we are looking only for people who already know C++, since we are willing to train the right candidate. (It is true that I could ask those questions only to those candidates who claim to know C++, but I'd like to give the same "test" to everyone.) Should I just focus on trying to get an idea of their level of intelligence? In other words, should I get them to talk and pay attention to the way they articulate their thoughts, and so on?

    Read the article

  • Using Google App Engine to Perform World Updates vs an Authoritative Server

    - by Error 454
    I am considering different game server architectures that use GAE. The types of games I am considering are turn-based, where the world state would need to be updated about once per minute. I am looking for an answer that persuades me to perform the world update either on the Google servers OR on an authoritative server that syncs with the datastore. The main goal here would be to minimize GAE daily quotas. For some rough numbers, I am assuming 10,000 entities requiring updates. Each entity update would require: reading 5 private entity variables (fetched from the datastore); fetching as many as 20 static variables (from the datastore or persisted in server memory); and writing 5 entity variables. Clients of the game would authenticate and set state directly against GAE, as well as pull the latest world state from GAE. Running the update on GAE would consist of a cron job launched every minute; this would update all of the entities and save the results to the datastore, and would be more CPU intensive for GAE. Running the update on an authoritative server would consist of fetching entity data from the GAE datastore, calculating the new entity states, and pushing the new state variables back to the datastore; this would be more bandwidth intensive for the datastore.
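    For a rough sense of scale against the daily quotas, using the numbers above and assuming the 20 static variables are cached in server memory rather than re-fetched every time (my arithmetic, not a measurement):

        $10{,}000 \times 5 \times 1{,}440 = 72{,}000{,}000$ datastore reads per day
        $10{,}000 \times 5 \times 1{,}440 = 72{,}000{,}000$ datastore writes per day
        $10{,}000 \times 20 \times 1{,}440 = 288{,}000{,}000$ additional reads per day if the static variables also come from the datastore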

    Read the article

  • Would it be possible to create an open source software library, entirely developed and moderated by an open community?

    - by Steven Jeuris
    Call it democratic software development, or open source on steroids if you will. I'm not just talking about the possibility of providing a patch which can be approved by the library owner. Think more along the lines of how Stack Exchange works. Anyone can post code, and through community moderation it is cleaned up and eventually valid code ends up in the final library. For complex libraries an elaborate system should probably be created, but for a simple library it is my belief that this is already possible even within the Stack Exchange platform. Take a library of extension methods for .NET, for example. Everybody goes their own way and implements their own subset of what they feel is important, open-source library or not. People want to share their code, but there is no suitable platform for it. extensionmethod.net is the result of answering this call for extension methods, but the framework hopelessly falls short; there is no order or structure at all. You don't know whether an idea is any good until you try it, so I decided to create an Extension Methods proposal on Area51. I believe that with proper moderation, it could be possible for the site to be more than a Q&A site, and that an actual library (or subsets of it) could be extracted from it. Has anything like this been attempted before? Are there platforms better suited for this?

    Read the article
