Search Results

Search found 5521 results on 221 pages for 'deeper understanding'.

Page 105/221 | < Previous Page | 101 102 103 104 105 106 107 108 109 110 111 112  | Next Page >

  • Big Data – Final Wrap and What Next – Day 21 of 21

    - by Pinal Dave
    In yesterday's blog post we explored various resources for learning Big Data, and in this post we wrap up the 21-day series. I have been exploring various terms and technologies related to Big Data this entire month. It was great fun to write about Big Data for 21 days, but the subject is far bigger than anyone can cover in 21 days. My first goal was to write about the basics, and I think we have that covered pretty well. During these 21 days I have received many questions related to Big Data. I have covered a few of them in this series, and a few more I will be covering in the coming months. Now that the Big Data basics are in place, here is the list of things I am personally going to do next. I am sharing it with you because it should give you a good idea of how to continue your own Big Data journey:
    - Build a schedule to read the various Apache documentation sets
    - Watch all the Pluralsight courses
    - Explore the HortonWorks Sandbox
    - Start building a presentation about Big Data – this is a great way to learn something new
    - Present on Big Data topics at user group meetings
    - Write more blog posts about Big Data
    I am going to continue learning about Big Data, and I want you to continue learning as well. Please leave a comment describing how you are going to keep learning about Big Data; I will publish all the informative comments on this blog with due credit. I want to end this series with the infographic by UMUC. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Will Your Brand Survive the Age of Digital Darwinism?

    - by Christie Flanagan
    It's the end of business as usual. Trends such as the mobile web, social media, gamification and real-time are forcing businesses to rethink the way they operate. At the same time, people are embracing a new digital culture across ever-expanding networks. Together, these trends have given rise to a new breed of connected consumer, one that is ready to shake the very foundation of business today. This is the age of Digital Darwinism – where society and technology evolve faster than your ability to adapt. How well poised is your brand to survive and thrive in this new environment? Attend this webcast to hear Altimeter Group digital analyst and futurist Brian Solis discuss the rise of connected consumerism and learn how brands can survive Digital Darwinism by better understanding customer expectations, disruptive technology, and the new opportunities that arise from them. You will learn:
    - How brands are being redefined in the digital consumer landscape and what they can do to create and steer these experiences
    - Why consumer influence is growing and how businesses can use this to their advantage
    - How to connect with a rising audience through new touchpoints between consumers, brands, and influencers
    - Why you need to create a culture of change to earn trust, influence and significance among today's connected customers
    Register now.

    Read the article

  • A Technical Perspective On Rapid Planning

    - by Robert Story
    Upcoming Webcast
    Title: A Technical Perspective On Rapid Planning
    Date: April 14, 2010
    Time: 11:00 am EDT, 9:00 am MDT, 8:00 am PDT, 16:00 GMT
    Product Family: Value Chain Planning
    Summary: Oracle's Strategic Network Optimization (SNO) product is a powerful supply chain design and tactical planning tool. This one-hour session is recommended for functional users who want to gain a better understanding of how Oracle's SNO solution can help you solve complex supply chain issues, including supply chain design, risk management, logistics planning, sustainability planning, and a whole lot in between! Find out how SNO can be used to solve many different types of real-world business issues. Topics will include:
    - Risk/Disaster Management
    - Carbon Emissions Management
    - Global Sourcing
    - Labor/Workforce Planning
    - Product Mix Optimization
    A short, live demonstration (only if applicable) and a question and answer period will be included. Click here to register for this session.
    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Where'd My Data Go? (and/or...How Do I Get Rid of It?)

    - by David Paquette
    Want to get a better idea of how cascade deletes work in Entity Framework Code First scenarios? Want to see it in action? Stick with us as we quickly demystify what happens when you tell your data context to nuke a parent entity. This post is authored by Calgary .NET User Group Leader David Paquette with help from Microsoft ASP.NET MVP James Chambers. We got to spend a great week back in March at Prairie Dev Con West, chock-full of sessions, presentations, workshops, conversations and, of course, questions. One of the questions that came up during my session: "How does Entity Framework Code First deal with cascading deletes?" James and I had different thoughts on what the default was, whether it differed from SQL Server, whether it was the same as EF proper, and whether there was a way to override whatever the default was. So we built a set of examples and figured out that the answer is simple: it depends. (Download Samples)
    Consider the example of a hockey league. The league contains several different entities, including games, teams that play the games, and players that make up the teams. Each team also has a mascot. If you delete a team, we need a couple of things to happen: the team, its games and its mascot should be deleted, and the players for that team should remain in the league (and therefore the database) but no longer be assigned to a team. So let's start by looking at the default behaviour in SQL when using an EDMX-driven project.
    The Reference – Understanding EF's Behaviour with an EDMX/DB First Approach
    First up, let's take a look at the DB first approach. In the database, we defined 4 tables: Teams, Players, Mascots, and Games. We also defined 4 foreign keys as follows:
    - Players.Team_Id (NULL) –> Teams.Id
    - Mascots.Id (NOT NULL) –> Teams.Id (ON DELETE CASCADE)
    - Games.HomeTeam_Id (NOT NULL) –> Teams.Id
    - Games.AwayTeam_Id (NOT NULL) –> Teams.Id
    Note that by specifying ON DELETE CASCADE for the Mascots –> Teams foreign key, the database will automatically delete the team's mascot when the team is deleted. While we want the same behaviour for the Games –> Teams foreign keys, it is not possible to accomplish this using ON DELETE CASCADE in SQL Server. Specifying ON DELETE CASCADE on these foreign keys would cause a circular reference error: "The series of cascading referential actions triggered by a single DELETE or UPDATE must form a tree that contains no circular references. No table can appear more than one time in the list of all cascading referential actions that result from the DELETE or UPDATE." – MSDN
    When we create an entity data model from the above database, we get an association for each of these foreign keys. In order to get the Games deleted when the Team is deleted, we need to specify an End1 OnDelete action of Cascade for the HomeGames and AwayGames associations. Now we have an Entity Data Model that accomplishes what we set out to do. One caveat here is that Entity Framework will only properly handle the cascading delete when the players and games for the team have been loaded into memory. For a more detailed look at Cascade Delete in EF Database First, take a look at this blog post by Alex James.
    Building The Same Sample with EF Code First
    Next, we're going to build up the model with the code first approach. EF Code First is defined on the ADO.NET team blog as follows: "Code First allows you to define your model using C# or VB.Net classes. Optionally, additional configuration can be performed using attributes on your classes and properties or by using a Fluent API. Your model can be used to generate a database schema or to map to an existing database." Entity Framework Code First follows some conventions to determine when to cascade delete on a relationship. More details can be found on MSDN:
    - If a foreign key on the dependent entity is not nullable, then Code First sets cascade delete on the relationship.
    - If a foreign key on the dependent entity is nullable, Code First does not set cascade delete on the relationship, and when the principal is deleted the foreign key will be set to null.
    - The multiplicity and cascade delete behavior detected by convention can be overridden by using the fluent API. For more information, see Configuring Relationships with Fluent API (Code First).
    Our DbContext consists of 4 DbSets:

        public DbSet<Team> Teams { get; set; }
        public DbSet<Player> Players { get; set; }
        public DbSet<Mascot> Mascots { get; set; }
        public DbSet<Game> Games { get; set; }

    When we set the Mascot –> Team relationship to required, Entity Framework will automatically delete the Mascot when the Team is deleted. This can be done either using the [Required] data annotation attribute, or by overriding the OnModelCreating method of your DbContext and using the fluent API.
    Data Annotations:

        public class Mascot
        {
            public int Id { get; set; }
            public string Name { get; set; }

            [Required]
            public virtual Team Team { get; set; }
        }

    Fluent API:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);
        }

    The Player –> Team relationship is automatically handled by the Code First conventions. When a Team is deleted, the Team property for all the players on that team will be set to null. No additional configuration is required; however, all the Player entities must be loaded into memory for the cascading to work properly.
    The Game –> Team relationship causes some grief in our Code First example. If we try setting the HomeTeam and AwayTeam relationships to required, Entity Framework will attempt to set ON DELETE CASCADE for the HomeTeam and AwayTeam foreign keys when creating the database tables. As we saw in the database first example, this causes a circular reference error and throws the following SqlException: "Introducing FOREIGN KEY constraint 'FK_Games_Teams_AwayTeam_Id' on table 'Games' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint."
    To solve this problem, we need to disable the default cascade delete behaviour using the fluent API:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);

            modelBuilder.Entity<Team>()
                .HasMany(t => t.HomeGames)
                .WithRequired(g => g.HomeTeam)
                .WillCascadeOnDelete(false);

            modelBuilder.Entity<Team>()
                .HasMany(t => t.AwayGames)
                .WithRequired(g => g.AwayTeam)
                .WillCascadeOnDelete(false);

            base.OnModelCreating(modelBuilder);
        }

    Unfortunately, this means we need to manually manage the cascade delete behaviour. When a Team is deleted, we need to manually delete all the home and away Games for that Team:

        foreach (Game awayGame in jets.AwayGames.ToArray())
        {
            entities.Games.Remove(awayGame);
        }
        foreach (Game homeGame in jets.HomeGames.ToArray())
        {
            entities.Games.Remove(homeGame);
        }
        entities.Teams.Remove(jets);
        entities.SaveChanges();

    Overriding the Defaults – When and How To
    As you have seen, the default behaviour of Entity Framework Code First can be overridden using the fluent API. This can be done by overriding the OnModelCreating method of your DbContext, or by creating separate model override files for each entity. More information is available on MSDN.
    Going Further
    These were simple examples, but they helped us illustrate a couple of points. First, we were able to demonstrate the default behaviour of Entity Framework when dealing with cascading deletes, specifically how entity relationships affect the outcome. Second, we showed you how to modify the code and control the behaviour to get the outcome you're looking for. Finally, we showed you how easy it is to explore this kind of thing, and we're hoping that you get a chance to experiment even further. For example, did you know that:
    - Entity Framework Code First also works seamlessly with SQL Azure (MSDN)
    - Database creation defaults can be overridden using a variety of IDatabaseInitializers (Understanding Database Initializers)
    - You can use code-based migrations to manage database upgrades as your model continues to evolve (MSDN)
    Next Steps
    There's no time like the present to start learning, so here's what you need to do:
    - Get up to date in Visual Studio 2010 (VS2010 | SP1) or Visual Studio 2012 (VS2012)
    - Build yourself a project to try these concepts out (or download the sample project)
    - Get into the community and ask questions! There are a ton of great resources out there and community members willing to help you out (like these two guys!).
    Good luck!
    About the Authors
    David Paquette works as a lead developer at P2 Energy Solutions in Calgary, Alberta, where he builds commercial software products for the energy industry. Outside of work, David enjoys camping, fishing, and skiing. David is also active in the software community, giving presentations both locally and at conferences, and serves as the President of the Calgary .NET User Group.
    James Chambers crafts software awesomeness with an incredible team at LogiSense Corp, based in Cambridge, Ontario. A husband, father and humanitarian, he is currently residing in the province of Manitoba, where he resists the urge to cheer for the Jets and maintains his allegiance to the Calgary Flames. When he's not active with the family, outdoors or volunteering, you can find James speaking at conferences and user groups across the country about web development and related technologies.
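    For reference, here is a minimal sketch of what the entity classes behind these examples might look like. They are not shown in the excerpt above, so the property names (HomeGames, AwayGames, and so on) are inferred from the DbSets and fluent API calls rather than taken from the original sample code.

        using System;
        using System.Collections.Generic;

        // Hypothetical entity classes matching the relationships described above
        // (Mascot is as shown earlier in the post).
        public class Team
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public virtual Mascot Mascot { get; set; }
            public virtual ICollection<Player> Players { get; set; }
            public virtual ICollection<Game> HomeGames { get; set; }
            public virtual ICollection<Game> AwayGames { get; set; }
        }

        public class Player
        {
            public int Id { get; set; }
            public string Name { get; set; }
            // Optional by convention, so deleting a Team nulls this reference
            // instead of deleting the Player.
            public virtual Team Team { get; set; }
        }

        public class Game
        {
            public int Id { get; set; }
            public DateTime Date { get; set; }
            // Both required, hence the need to turn off cascade delete in OnModelCreating.
            public virtual Team HomeTeam { get; set; }
            public virtual Team AwayTeam { get; set; }
        }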

    Read the article

  • Writing a SQL Azure Book - Notes

    - by Herve Roggero
    Over the last few months I have had the opportunity to ramp up significantly on SQL Azure. In fact, I will be the co-author of Pro SQL Azure, published by Apress. This is going to be a book on how to best leverage SQL Azure, both from a technology and a design standpoint. Talking about design, one of the things I realized is that understanding the key limitations and boundary parameters of Azure in general, and of SQL Azure more specifically, plays an important role in making sound design decisions that both meet reasonable performance requirements and minimize the costs associated with running a cloud computing solution. The book touches on many design considerations, including link encryption, the pricing model, design patterns, and some important performance techniques that need to be leveraged when developing in Azure, including caching, lazy properties and more. Finally, I started working with shards and how to implement them in Azure to ensure database scalability beyond the current size limitations. Implementing shards is not simple, and the book will address how to create a shard technology within your code to provide a scale-out mechanism for your SQL Azure databases. As you can see, there are many factors to consider when designing a SQL Azure database. While we can think of SQL Azure as a cloud version of SQL Server, it is best to look at it as a new platform so that you don't make any assumptions about how best to leverage it.

    Read the article

  • Round-up: Embedded Java posts and videos

    - by terrencebarr
    I've been collecting links to some interesting blog posts and videos related to embedded Java over the last couple of weeks. Passing these on here:
    - Freescale blog – The Embedded Beat: "Let's make it real – Internet of Things"
    - Simon Ritter's blog: "Mind Reading with Raspberry Pi"
    - NightHacking with Steve Chin and Terrence Barr: "Java in the Internet of Things"
    - NightHacking with Steve Chin and Aldebaran Robotics: "The NAO Robot"
    - Java Magazine: "Getting Started with Java SE for embedded devices on Raspberry Pi"
    - OTN video interview: "Java at ARM TechCon"
    - OPN Techtalk with MX Entertainment: "Using Java and MX's GrinXML Framework to build Blu-ray Disc and media applications"
    - Oracle PartnerNetwork Blog: "M2M Architecture: Machine to Machine – The Internet of Things – It's all about the Data"
    - YouTube Java Channel: "Understanding the JVM and Low Latency Applications"
    Cheers, – Terrence
    Filed under: Mobile & Embedded Tagged: blog, iot, Java, Java Embedded, Raspberry Pi, video

    Read the article

  • Does it matter the direction of a Huffman's tree child node?

    - by Omega
    So, I'm on my quest to create a Java implementation of Huffman's algorithm for compressing/decompressing files (as you might know, ever since Why create a Huffman tree per character instead of a Node?) for a school assignment. I now have a better understanding of how this thing is supposed to work. Wikipedia has a great-looking algorithm here that seemed to make my life way easier. Taken from http://en.wikipedia.org/wiki/Huffman_coding:
    1. Create a leaf node for each symbol and add it to the priority queue.
    2. While there is more than one node in the queue:
       - Remove the two nodes of highest priority (lowest probability) from the queue.
       - Create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities.
       - Add the new node to the queue.
    3. The remaining node is the root node and the tree is complete.
    It looks simple and great. However, it left me wondering: when I "merge" two nodes (make them children of a new internal node), does it even matter which direction (left or right) each node ends up in? I still don't fully understand Huffman coding, and I'm not sure whether there is a criterion for deciding whether a node should go to the right or to the left. I assumed that perhaps the highest-frequency node would go to the right, but I've seen some Huffman trees on the web that don't seem to follow such a criterion. For instance, Wikipedia's example image (http://upload.wikimedia.org/wikipedia/commons/thumb/8/82/Huffman_tree_2.svg/625px-Huffman_tree_2.svg.png) seems to put the higher-frequency nodes on the right, but other images, like this one (http://thalia.spec.gmu.edu/~pparis/classes/notes_101/img25.gif), have them all on the left. However, they're never mixed up in the same image (some to the right and others to the left). So, does it matter? Why?
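    Since the question is about the merge step, here is a minimal sketch of that loop. It is written in C# rather than Java (to match the other code in this listing), the Node type is a stand-in, and a sorted list is used as a simple substitute for a real priority queue. The point it illustrates: the algorithm never dictates which child goes left or right; swapping them changes the individual codes but not their lengths, so compression is unaffected as long as encoder and decoder use the same tree.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical node type for illustration; not from the asker's code.
        class Node
        {
            public char? Symbol;      // null for internal nodes
            public int Frequency;
            public Node Left, Right;
        }

        static class HuffmanSketch
        {
            public static Node Build(Dictionary<char, int> frequencies)
            {
                var queue = frequencies
                    .Select(kv => new Node { Symbol = kv.Key, Frequency = kv.Value })
                    .ToList();

                while (queue.Count > 1)
                {
                    // Take the two nodes with the lowest frequency (highest priority).
                    queue.Sort((a, b) => a.Frequency.CompareTo(b.Frequency));
                    Node first = queue[0];
                    Node second = queue[1];
                    queue.RemoveRange(0, 2);

                    // Left/right assignment here is a convention, not a rule.
                    queue.Add(new Node
                    {
                        Frequency = first.Frequency + second.Frequency,
                        Left = first,
                        Right = second
                    });
                }

                return queue[0];   // the remaining node is the root
            }
        }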

    Read the article

  • The Social Business Thought Leaders - Esteban Kolsky

    - by kellsey.ruppel
    Esteban Kolsky's presentation at the Social Business Forum 2012 was meaningfully titled "Everything you wanted to know about Customer Service using Social but had no one to ask". A recent survey by ThinkJar, Kolsky's independent analyst firm, reported that more than 90% of the interviewed companies consider embracing social channels in customer service the right thing to do for the business and its customers. These numbers shouldn't be too surprising given the popularity of services such as Twitter and Facebook (59% and 60% respectively in the survey) among organizations, the power consumers are gaining online, and the 40% preference consumers have for escalating issues on social services. Moreover, both large enterprises and small businesses are realizing that customer retention is cheaper and easier than customer acquisition. Many companies are looking at communities and social networks as an opportunity to drive loyalty, satisfaction and word of mouth. However, in this early phase the way they are preparing to launch social support appears to be lacking at best:
    - 66% have no defined processes for customer service over social channels
    - 68% were not able to estimate ROI before deploying social in customer service
    - Only 8% found the expected ROI
    - Most of the projects are stuck in the pilot or testing phase
    In his interview for the Social Business Thought-Leaders, Esteban discusses how to turn social media hype into business gains by touching upon some of the hottest topics organizations face when approaching social support:
    - How to go from social media monitoring to actionable insights
    - How Social CRM should best be positioned in regard to traditional CRM
    - The importance of integrating social data with transactional data
    Conversations with customer service organizations point to 2012 as the year of "understanding what social means for supporting customers". Will 2013 be the year it all becomes reality? We invite you to listen to Esteban Kolsky's interview to understand how to most effectively develop cross-channel strategies that include social channels and improve both customer satisfaction and the overall customer experience.

    Read the article

  • The Workshop Technique Handbook

    - by llowitz
    The #OUM method pack contains a plethora of information, but if you browse through the activities and tasks contained in OUM, you will see very few references to workshops. Consequently, I am often asked whether OUM supports a workshop-type approach. In general, OUM does not prescribe the manner in which tasks should be conducted, as many factors, such as culture, availability of resources, and the potential travel cost of attendees, can influence whether a workshop is appropriate in a given situation. Although workshops are not typically called out in OUM, OUM encourages the project manager to group the OUM tasks in a way that makes sense for the project. OUM considers a workshop to be a technique that can be applied to any OUM task or group of tasks. If a workshop is conducted, it is important to identify the OUM tasks that are executed during the workshop. For example, a "Requirements Gathering Workshop" is quite likely to include Gathering Business Requirements, Gathering Solution Requirements and perhaps Specifying Key Structure Definitions. Not only is a workshop approach to conducting the OUM tasks perfectly acceptable, OUM provides in-depth guidance on how to maximize the value of your workshops. I strongly encourage you to read the "Workshop Techniques Handbook" included in the OUM Manage Focus Area under Method Resources. The Workshop Techniques Handbook provides valuable information on a variety of workshop approaches and discusses the circumstances in which each type of workshop is most effective. Furthermore, it provides detailed information on how to structure a workshop and tips on facilitating it. You will find guidance on some popular workshop techniques such as brainstorming, setting objectives and prioritizing, as well as more specialized techniques such as Value Chain Analysis, SWOT analysis, the Delphi Technique and much more. Workshops can and should be applied to any type of OUM project, whether that project falls within the Envision, Manage or Implementation focus areas. If you typically employ workshops to gather information, walk through a business process, develop a roadmap or validate your understanding with the client, by all means continue using them to conduct the OUM tasks during your project, but first take the time to review the Workshop Techniques Handbook to refresh your knowledge and hone your skills.

    Read the article

  • How to implement a birds eye view of 2D Grid Map using Android

    - by IM_Adan
    I'm a true beginner with the Android platform and I'm having difficulty implementing a 2D grid system for a tower defense type game, where I can place towers on a specific tile and enemies can traverse the tiles, etc. What I would like is a practical explanation of how I could tackle this – a step by step guide for dummies. These are what I believe to be the necessary steps; I might be wrong, and I hope someone can help me out:
    1. Calculate the width and height of the view I'm working with.
    2. Based on that, determine the number of tiles required and their dimensions (still not sure how I would do this).
    3. Create each tile as a Rectangle object and draw these rectangles on a canvas.
    I would really be grateful if someone could steer me in the right direction on how to implement a 2D grid map using Android. I hope the answer to this question helps the TRUE beginners out there like me. I have looked at the links below, yet I still feel that I don't truly understand what's going on. For XNA:
    - 2D Grid based game - how should I draw grid lines?
    - How to Create a Grid for a 2D Game?
    Also a quick note: all my previous game development has been in Java, mostly using Java SE and Swing. I also have a good understanding of the game development process; it is only Android that's confusing me :S
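    Step 2 above is just arithmetic once the view size is known: pick a tile size (or a tile count) and derive the rest from the view's width and height. The sketch below shows that idea in C# to stay consistent with the other code in this listing; it is not Android API code, and the names (GridMap, desiredTileSize, CellAt) are purely illustrative.

        // Illustrative only: computes how many whole tiles of a desired size fit in a
        // view, then stores each tile's bounding rectangle for hit-testing and drawing.
        class GridMap
        {
            public int Columns { get; }
            public int Rows { get; }
            public int TileSize { get; }
            private readonly (int X, int Y, int W, int H)[,] tiles;

            public GridMap(int viewWidth, int viewHeight, int desiredTileSize)
            {
                Columns = viewWidth / desiredTileSize;   // integer division: whole tiles only
                Rows = viewHeight / desiredTileSize;
                TileSize = desiredTileSize;
                tiles = new (int, int, int, int)[Columns, Rows];

                for (int col = 0; col < Columns; col++)
                    for (int row = 0; row < Rows; row++)
                        tiles[col, row] = (col * TileSize, row * TileSize, TileSize, TileSize);
            }

            // Maps a touch position back to a grid cell, e.g. for placing a tower.
            public (int Col, int Row) CellAt(int x, int y) => (x / TileSize, y / TileSize);
        }

    On Android, the natural place to run this calculation is the view's onSizeChanged callback, with the rectangles then drawn in onDraw via Canvas.drawRect.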

    Read the article

  • C IDE for Mac needed

    - by StasM
    I'm looking for a heavy-duty C/C++ IDE for Mac that satisfies the following criteria:
    - Works with big projects (~5000 files, some of them 100K big) efficiently.
    - Has good navigation, both file-based and symbol-based – i.e. "go to file", "go to function", etc., with autocompletion support.
    - Supports "go to declaration/definition" for symbols – functions, structures, etc.
    - Auto-adds new files in folders already in the project.
    - Supports code completion for values, function names, etc.
    - At least rudimentary C preprocessor macro understanding – i.e. given #define foo bar, foo() should take me either to the #define or to the actual bar. I understand full preprocessor parsing may be hard, but I hope for at least the obvious cases.
    - Supports displaying parameter names/types by function name, preferably integrated with the previous item, for functions defined in the project. Support for libc would be nice too :)
    - (optional) Cross-project search support (I can manage with grep -r if everything else works)
    - (optional) SVN support, at least to some extent (update, commit, mark updated)
    Is there such an editor around? Free would be nice, but I'm ready to part with some money if it's good enough. I'm using TextMate now but I'm not satisfied with it. I tried Xcode, but it doesn't seem able to handle a large project – it just crashed...

    Read the article

  • Using USB to Ethernet with Linux Ubuntu 12.04

    - by Sriram
    Being a newbie, please excuse me if the technical jargon I use is not universally accepted :) I have a particular device (say device A) whose USB 2.0 driver is available from the Linux community. A Linux Ubuntu 12.04 based PC is able to detect that device via the available driver. My requirement is to ensure that the PC can exchange commands as well as data with device A over TCP/IP packets. In other words, instead of just a USB-based driver, there should be a TCP/IP wrapper over the device's USB driver that still does the same job as the USB driver did before. I bought a USB (female) to RJ-45 adapter, connected device A's (male) USB to the USB female end of the adapter, and connected the Ethernet end to the router. The PC is also connected to the same router, so that both device A and the PC have IP addresses in the same subnet range. The idea is that the packets produced by device A can be routed to the PC via some binding (not sure how I can achieve this; it is only a conceptual idea). Here are the issues I can see as of now:
    1. A USB to RJ-45 adapter is just a hardware signal conversion and not a NIC in itself, and hence no MAC/IP address is assigned. Can we bind a virtual NIC created on the PC to this connector?
    2. Are there any available USB-to-IP command and data translation wrappers? E.g. a command for device A over Ethernet converted into a command for device A over USB, which is then acted upon by the device as a command from the USB driver.
    There is some missing link in my understanding, and hence it would be of great help if you could bounce off some ideas on how I can take this forward so that device A and the PC can exchange data over IP. Thanks and Regards, Sriram

    Read the article

  • How do I properly set up and secure a production LAMP Server?

    - by Niklas
    It's very hard to find comprehensive information on this subject. Either I found short tutorials on how to perform the installation, as simple as "apt-get install apache2", or outdated tutorials. So I was hoping I could get some professional information from my fellow members of the Ubuntu community :D I have performed a normal Ubuntu Server 11.04 installation with LAMP, Samba and SSH installed through the system installer. But I'm having some trouble setting up virtual hosts and making the system secure enough to expose the server to the web. I've somewhat followed this tutorial so far. I have 3 sites in /etc/apache2/sites-available which all look like this, except for different site names:

        <VirtualHost example.com>
            ServerAdmin webmaster@localhost
            ServerAlias www.edunder.se
            DocumentRoot /var/www/sites/example
            CustomLog /var/log/apache2/www.example.com-access.log combined
        </VirtualHost>

    I have enabled them with the a2ensite command, so I have symbolic links in /etc/apache2/sites-enabled. My /etc/hosts file has these lines:

        127.0.0.1 localhost
        127.0.1.1 Ubuntu.lan Ubuntu
        127.0.0.1 localhost.localdomain localhost example.com www.example.com
        127.0.0.1 localhost.localdomain localhost example2.com www.example2.com
        127.0.0.1 localhost.localdomain localhost example3.com www.example3.com

    I can only access one of them from the browser (I have lynx installed on the server for testing purposes), so I guess I haven't set them up properly :) How should I proceed to get a secure and proper setup? I also use MySQL, and I think this tutorial will be enough to set up SSH securely. Please help me understand Apache configuration better, since I'm new to setting up my own server (I've only run XAMPP before), and please advise on how I should set up a firewall as well :D

    Read the article

  • need explanation on amortization in algorithm

    - by Pradeep
    I am learning algorithm analysis and came across an analysis tool for understanding the running time of algorithms with widely varying performance, which is called amortization. The author quotes: "An array with an upper bound of n elements, with a fixed bound N on its size. Operation clear takes O(n) time, since we should dereference all the elements in the array in order to really empty it." The above statement is clear and valid. Now consider the next passage: "Now consider a series of n operations on an initially empty array. If we take the worst-case viewpoint, the running time is O(n^2), since the worst case of a single clear operation in the series is O(n) and there may be as many as O(n) clear operations in the series." From the above statement, how is the time complexity O(n^2)? I did not understand the logic behind it. If n operations are performed, how is it O(n^2)? Please explain what the author is trying to convey.
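    The bound the author states is just the naive product of the number of operations and the worst-case cost of a single operation. A one-line sketch (mine, not from the quoted book):

        \[
          T(n) \;\le\; \underbrace{n}_{\text{operations}} \cdot \underbrace{O(n)}_{\text{worst-case cost of one clear}} \;=\; O(n^2).
        \]

    Amortized analysis then tightens this by observing that a clear can only dereference elements that were previously added, and each element is added (and therefore dereferenced) at most once, so the total work across the whole series is only O(n), i.e. amortized O(1) per operation.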

    Read the article

  • How does flocking algorithm work?

    - by Chan
    I read and understand the basics of the flocking algorithm. Basically, we need three behaviors:
    1. Cohesion
    2. Separation
    3. Alignment
    From my understanding, it's like a state machine. Every time we do an update (then draw), we check the constraints of all three behaviors, and each behavior returns a Vector3 which is the "correct" orientation that the object should transform to. So my initial idea was:

        /// <summary>
        /// Objects stick together
        /// </summary>
        private Vector3 Cohesion()
        {
            Vector3 result = new Vector3(0.0f, 0.0f, 0.0f);
            return result;
        }

        /// <summary>
        /// Objects align
        /// </summary>
        private Vector3 Align()
        {
            Vector3 result = new Vector3(0.0f, 0.0f, 0.0f);
            return result;
        }

        /// <summary>
        /// Objects separate from each other
        /// </summary>
        private Vector3 Separate()
        {
            Vector3 result = new Vector3(0.0f, 0.0f, 0.0f);
            return result;
        }

    Then I searched online for pseudocode, but many examples involve velocity and acceleration plus other stuff. This part confused me. In my game, all objects move at constant speed, and they have one leader. So can anyone share an idea of how to start implementing this flocking algorithm? Also, did I understand it correctly? (I'm using XNA 4.0)
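    Since the question is about where to start, here is a minimal sketch (not the asker's code) of the classic boids-style cohesion and separation rules, assuming a hypothetical list of neighbouring boids with a Position field. Each rule returns a steering direction; the caller can weight and sum the vectors and, for constant-speed movement, normalize the result before applying it.

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;   // Vector3 (XNA 4.0)

        // Hypothetical boid type for illustration only.
        class Boid
        {
            public Vector3 Position;
        }

        class FlockingSketch
        {
            private Vector3 position;        // this object's position
            private List<Boid> neighbors;    // assumed to be maintained elsewhere

            public FlockingSketch(Vector3 position, List<Boid> neighbors)
            {
                this.position = position;
                this.neighbors = neighbors;
            }

            // Cohesion: steer toward the average position of nearby flockmates.
            private Vector3 Cohesion()
            {
                if (neighbors.Count == 0) return Vector3.Zero;

                Vector3 center = Vector3.Zero;
                foreach (Boid b in neighbors)
                    center += b.Position;
                center /= neighbors.Count;

                return center - position;    // direction toward the local center of mass
            }

            // Separation: steer away from flockmates that are too close.
            private Vector3 Separate()
            {
                const float tooClose = 2.0f; // illustrative threshold
                Vector3 push = Vector3.Zero;

                foreach (Boid b in neighbors)
                {
                    Vector3 away = position - b.Position;
                    float distSq = away.LengthSquared();
                    if (distSq > 0 && distSq < tooClose * tooClose)
                        push += away / distSq;   // closer neighbours push harder
                }
                return push;
            }
        }

    Alignment normally averages neighbours' headings; with constant speed and a single leader, one possible simplification is to steer toward the leader's facing direction instead.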

    Read the article

  • How Facebook's Ad Bid System Works

    - by pnongrata
    When you are creating an ad on Facebook, you are provided with a "suggested bid" range (e.g., $0.90 - $2.15 USD). According to this page: "The suggested bid range is there to help you pick a maximum bid so your ad will be successful. It's based on how many other advertisers are competing to show their ad to the same audience as you are." I'm interested in understanding what's actually going on (technically) under the hood here. Say a user logs into Facebook. On the server side, the HTTP request that the user's browser sent (as part of the login) is handled, and the server needs to figure out which ad to display back to the user. I assume this is where the "bidding" system comes into play? Say that, based on this user's demographics and on the audience targeting that several competing advertisers designed their campaigns with, let's pretend that Facebook sees a pool of 20 different ads it could return. How does this bidding system help Facebook determine which of the 20 ads it returns to the client side? I'm guessing that advertisers who "bid more" get prioritized over those who "bid less". But when does this bidding take place? How often does an advertiser need to re-bid? How long is a bid binding for? Once I understand these usage-related concepts behind ads, it will probably be obvious which of the following "selection strategies" the backend is using:
    - Round robin
    - Prioritized round robin
    - Randomized (doubtful)
    - History-based
    - MVP-based
    Thanks to anyone who can help point me in the right direction and explain what these suggested bid systems are and how they work.
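    Facebook's actual auction internals aren't public, so purely as an illustration of the "prioritized by bid" guess above, here is a hypothetical sketch of bid-weighted selection from a pool of candidate ads. Everything in it (the Ad type, weighting by the maximum bid) is an assumption made for illustration, not a description of Facebook's system.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical candidate ad: bid in dollars, already filtered by audience targeting.
        record Ad(string Id, double MaxBid);

        static class AdSelectionSketch
        {
            // Bid-weighted random selection: higher bids win the impression
            // proportionally more often, but lower bids still get some exposure.
            public static Ad Pick(IReadOnlyList<Ad> candidates, Random rng)
            {
                double total = candidates.Sum(a => a.MaxBid);
                double roll = rng.NextDouble() * total;

                foreach (Ad ad in candidates)
                {
                    roll -= ad.MaxBid;
                    if (roll <= 0) return ad;
                }
                return candidates[candidates.Count - 1];   // numerical edge case
            }
        }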

    Read the article

  • Are More Comments Better in High-Turnover Environments?

    - by joshin4colours
    I was talking with a colleague today. We work on code for two different projects. In my case, I'm the only person working on my code; in her case, multiple people work on the same codebase, including co-op students who come and go fairly regularly (every 8-12 months). She said that she is liberal with her comments, putting them all over the place. Her reasoning is that it helps her remember where things are and what things do, since much of the code wasn't written by her and could be changed by someone other than her. Meanwhile, I try to minimize the comments in my code, putting them in only in places with an unobvious workaround or bug. However, I have a better understanding of my code overall, and have more direct control over it. My opinion is that comments should be minimal and the code should tell most of the story, but her reasoning makes sense too. Are there any flaws in her reasoning? Comments may clutter the code, but they could ultimately be quite helpful if many people work on it in the short to medium run.

    Read the article

  • Finding the right back-end developer

    - by John Watson
    I am creating a website for mobile phone tests. Users can post their own tests and combine them with an existing rating of each product. I only do front-end development and have no idea about the back end – PHP, SQL, etc. I am not sure I should operate the website without this knowledge, but my first thought is to find a professional, hand my website over to him, and let him do the rest. The only thing is that I need to update it regularly and post my own tests, and I don't know how that works or how I should approach it. My understanding is that, after I have finished the website (written in HTML, CSS, JavaScript/jQuery), I would go and find a PHP programmer and ask him to put it online, make it secure, make sure that the facility for users to post their own tests works, and make sure that it runs smoothly with the host/server I've chosen. Could you tell me if my approach makes sense (is that the way to do it)? What should I consider when searching for the right back-end developer, in terms of price/performance, trust, etc.?

    Read the article

  • Online betting system design [closed]

    - by Rafal
    I am a Computer Science student, preparing for my exam in software engineering. I am struggling to answer one of the sample questions for the scenario below. My understanding is that the system design approach should probably be a mixture of agile and plan-driven elements, but, since I have no practical experience, it's hard for me to decide on the balance and the tools that should be used. I will appreciate any hints from experienced business analysts who were involved in similar kinds of projects. Ray Sing is the owner of "Last Betz", a bookmaker with 7 outlets across Louth and Meath. With the advent of smartphones, Ray would now like to allow his clients to place their bets online using their mobile devices. Clients would register for an account and password and would log their credit card details via the Last Betz website. To begin using the facility, customers must 'load' their accounts with 100 euros. Any winnings, minus commission, will be placed in the account, whilst any losses will be automatically deducted from the account. Assuming you have been selected to develop the above system: How would you approach the design of this system? Discuss the design methods and models you would use.

    Read the article

  • HTML5 Development for Dummies

    - by Geertjan
    What's HTML5 all about, and what does it actually mean, concretely, to develop HTML5 applications? NetBeans IDE 7.3 provides something called "Project Easel", which is a bundling of HTML5-related tools into a coherent toolset. Within a matter of hours, you'll know everything you need to know about HTML5 development if you follow the steps below.
    1. Get A Solid Overview. Start by viewing this screencast from JavaOne 2012 (click the media link on the right side once you've clicked the link below; a downloadable MP4 file is also available there): https://oracleus.activeevents.com/connect/sessionDetail.ww?SESSION_ID=4038 That is an awesome way to get into the right frame of mind for what HTML5 is and how it fits into the programming world, together with a very cool and entertaining demo, presented by JB Brock. He starts with about three slides and then does a super awesome demo that puts you into the picture very quickly.
    2. Understand How HTML5 Relates To Java EE. Now here's a very cool follow-up to the above, again demo-driven (click the media links on the right side once you've clicked the link below): https://oracleus.activeevents.com/connect/sessionDetail.ww?SESSION_ID=4737 David Konecny takes the Affable Bean project created via the NetBeans E-commerce Tutorial and creates an HTML5 front end for it! I.e., you are shown how HTML5 can provide a different front end, as an alternative to JSF. Why would you do that? Well, that's explained in David's session, as well as in JB Brock's session: choose the right technology for the right situation. Sometimes HTML5 might make sense, other times JSF might make sense.
    3. Follow The NetBeans Screencasts. To revise and firm up everything you've learned from the above two JavaOne sessions, watch two screencasts by Ken Ganfield: part 1, Getting Started with HTML5, and part 2, Working with JavaScript in HTML5 Applications. In particular, you'll learn how NetBeans IDE provides tools to thoroughly cover the needs of HTML5 developers.
    Having taken the above three steps, you will have a thorough background, together with an understanding of the tools and procedures needed to create your own HTML5 applications.

    Read the article

  • TotalPhase Aardvark driver's GPL license

    - by Philip
    I'm using an SPI host adapter for a project, the Aardvark from TotalPhase. And I did something crazy: I read the EULA that everyone just clicks through. The driver installation license includes these bits: "This driver installer package also includes a WIN32 driver that is entirely based on the libusb-win32 project (release 0.1.10.1). ... LICENSE: The software in this package is distributed under the following licenses: Driver: GNU General Public License (GPL); Library, Test Files: GNU Lesser General Public License (LGPL)." Now, my understanding of the GPL is that it's sticky and viral. If you include GPL software, then the whole project has to be released under the GPL (if you distribute it, that is; you can do whatever you want with in-house projects). If the driver were like the library and licensed under the LGPL, it could be used by my closed-source proprietary project, as long as its source and license were passed along with it. But it's not; it's pure GPL. If I include this driver in my project and distribute it, am I required to release my project under the GPL?

    Read the article

  • So, whats the best book on C#?

    - by mbcrump
    I see this question several times a day, from newbies to professionals. I have listed the best C# books that I have read so far.
    - ECMA-334 C# Language Specification – FREE book. This is probably the best place to start. Read it backwards and forwards; you can even request a hard copy.
    - Absolute Beginner's Guide to C#, 2nd Edition – Used this early on and found it very useful, even if it is game programming.
    - C# 2.0: The Complete Reference, 2nd Edition (McGraw-Hill, 2006) – One of the most useful books, and it is always with me. It contains short example code and is very well written.
    - Dot Net Zero by Charles Petzold – FREE book, and you should definitely give it a read.
    - C# in Depth by Jon Skeet – Probably one of the most in-depth books on C# and definitely not for beginners. Jon Skeet knows C# like no other. I would consider this book the Bible of C#. If you understand 50% of this book, you have a good understanding of the language.
    - CLR via C#, 3rd Edition – I just started reading this book, and it is another book that's not for beginners. If you really want to understand the CLR, then give this book a try.
    Well, that's it. I hope you enjoy the books, as I have spent a lot of time researching different C# books.

    Read the article

  • Design for XML mapping scenarios between two different systems [on hold]

    - by deepak_prn
    Mapping XML fields between two systems is a mundane routine in integration scenarios. I am trying to make the design documents look better and provide a clear understanding to the developers, especially when we do not use XSLT or any IDE such as JDeveloper or Eclipse plugins. I want it to be a high-level design, but at the same time speak the developer's language, so that no requirements slip through the cracks. For example, one of the scenarios goes: the store cashier sells an item, and the transaction data is sent to the data management system. Now, I am writing a functional design for the scenario, which deals with mapping XML fields between our system and the data management system. Question: has anyone had to deal with mapping XML fields between two systems (without XSLT being involved)? Did you use a table to represent the field mappings (an example is below) or some other visualization tool that doesn't break the bank? I am trying to find out if there is a better way to represent XML mappings in your design documents. The widely accepted and used method seems to be a simple table, such as the one in the picture, to illustrate the mapping. I am wondering if there are alternative ways or tools to represent this, such as in Altova.

    Read the article

  • DAO/Webservice Consumption in Web Application

    - by Gavin
    I am currently working on converting a "legacy" web-based (ColdFusion) application from a single data source (MSSQL database) to multi-tier OOP. In my current system there is a read/write database with all the usual stuff, plus additional "read-only" databases that are exported daily/hourly from an Enterprise Resource Planning (ERP) system by SSIS jobs, containing business product/item and manufacturing/SCM planning data. The reason I have the opportunity and need to convert to multi-tier OOP is that a newer, more modern ERP system is being implemented business-wide as a complete replacement. This newer ERP system offers several interfaces for third-party applications like mine, from direct SQL access to either a .NET web service or a SOAP-like web service. I have found several suitable frameworks I would be happy to use (ColdSpring, FW/1), but I am not sure what design patterns apply to my data access object/component and how to manage the connection/session tokens. With this background, my question has the following three parts:
    1. I have concerns about moving from the relative safety of an SSIS job, which protects me from downtime and from the speed of the ERP system, to connecting directly with one of the web services, which I note seem significantly slower than I expected (simple/small requests often take up to a whole second). Are there any design patterns I can investigate or use to cache and protect my data tier?
    2. It is my understanding that data access objects (the components that connect directly with the web services and convert the responses into the data types I can then work with in my domain objects) should be singletons (and will act as an Adapter/Facade). Am I correct?
    3. As part of the data access object I have to set up a connection by username/password (I could set up multiple users and/or connect multiple times with this), which responds with a session token that needs to be provided on every subsequent request. Do I do this once and share it across the whole application? Do I set up a new "connection" for every user of my application and keep the token in their session scope (which might quickly hit licensing limits)? Do I set the "connection" up per page request? Or is there a design pattern I am missing that can manage multiple "connections", where a request uses the first free "connection"? It is worth noting that if the ERP system dies I will need to reset/invalidate all the connections and start from scratch, and depending on which web service I use I might need to manually close the "connection/session".
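    On points 2 and 3, a common shape for this (independent of the eventual answer) is a single shared gateway that lazily authenticates, caches the session token, and re-authenticates when the ERP invalidates it. The sketch below is in C# rather than ColdFusion, purely to match the other code in this listing, and every name in it (IErpClient, ErpGateway, the "session expired" exception) is a hypothetical stand-in, not the ERP vendor's actual API.

        using System;

        // Hypothetical ERP client interface; the real web-service proxy would sit behind this.
        interface IErpClient
        {
            string Authenticate(string user, string password);          // returns a session token
            string Call(string sessionToken, string operation, string payload);
        }

        // Shared gateway (adapter/facade): one instance for the whole application,
        // caching the session token and re-authenticating if the ERP drops it.
        sealed class ErpGateway
        {
            private readonly IErpClient client;
            private readonly string user, password;
            private readonly object sync = new object();
            private string token;                                        // cached session token

            public ErpGateway(IErpClient client, string user, string password)
            {
                this.client = client;
                this.user = user;
                this.password = password;
            }

            public string Call(string operation, string payload)
            {
                try
                {
                    return client.Call(GetToken(), operation, payload);
                }
                catch (UnauthorizedAccessException)                      // stand-in for "session expired"
                {
                    lock (sync) { token = null; }                        // drop the stale token
                    return client.Call(GetToken(), operation, payload);  // one retry after re-auth
                }
            }

            private string GetToken()
            {
                lock (sync)
                {
                    if (token == null)
                        token = client.Authenticate(user, password);     // lazy, shared authentication
                    return token;
                }
            }
        }

    For a caching layer in front of a slow ERP (point 1), the same gateway is a natural place to add a short-lived read cache, since all calls already funnel through it.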

    Read the article
