Search Results

Search found 8401 results on 337 pages for 'bad habits'.

Page 171 of 337

  • Recovering a lost website with no backup?

    - by Jeff Atwood
    Unfortunately, our hosting provider experienced 100% data loss, so I've lost all content for two hosted blog websites: http://blog.stackoverflow.com and http://www.codinghorror.com (Yes, yes, I absolutely should have kept complete offsite backups. Unfortunately, all my backups were on the server itself. So save the lecture; you're 100% absolutely right, but that doesn't help me at the moment. Let's stay focused on the question here!) I am beginning the slow, painful process of recovering the websites from web crawler caches. There are a few automated tools for recovering a website from internet web spider (Yahoo, Bing, Google, etc.) caches, such as Warrick, but I had bad results using it:

    - My IP address was quickly banned from Google for using it.
    - I got lots of 500 and 503 errors and "waiting 5 minutes…" messages.
    - Ultimately, I can recover the text content faster by hand.

    I've had much better luck by using a list of all blog posts, clicking through to the Google cache, and saving each individual file as HTML. It's labor-intensive, but there aren't that many blog posts, and I figure I deserve some self-flagellation for not having a better backup strategy. Anyway, the important thing is that I've had good luck getting the blog post text this way, and I am definitely able to get the text of the web pages out of the Internet caches. Based on what I've done so far, I am confident I can recover all the lost blog post text and comments. However, the images that go with each blog post are proving…more difficult. Any general tips for recovering website pages from Internet caches, and in particular, places to recover archived images from website pages? (And, again, please, no backup lectures. You're totally, completely, utterly right! But being right isn't solving my immediate problem… Unless you have a time machine…)
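
    One starting point for both text and images is the Internet Archive's public Wayback Machine "availability" endpoint, which reports the closest archived snapshot of a URL. A minimal sketch follows; the post URL is a made-up placeholder, and the JSON shape in the comment should be verified against the current API docs:

        import java.net.URI;
        import java.net.URLEncoder;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;
        import java.nio.charset.StandardCharsets;

        public class WaybackLookup {
            public static void main(String[] args) throws Exception {
                // Hypothetical post URL; substitute each URL from the list of blog posts.
                String page = "http://www.codinghorror.com/blog/archives/001234.html";
                String api = "https://archive.org/wayback/available?url="
                        + URLEncoder.encode(page, StandardCharsets.UTF_8);
                HttpRequest request = HttpRequest.newBuilder(URI.create(api)).GET().build();
                HttpResponse<String> response = HttpClient.newHttpClient()
                        .send(request, HttpResponse.BodyHandlers.ofString());
                // A hit looks roughly like:
                // {"archived_snapshots":{"closest":{"available":true,"url":"https://web.archive.org/web/..."}}}
                System.out.println(response.body());
                Thread.sleep(2000); // pause between requests; hammering caches is what got the IP banned above
            }
        }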

    Read the article

  • Updating physics for animated models

    - by Mathias Hölzl
    For a new game (a shooter) we have to set up a scene with a minimum of 30 bone-animated models. The problem is that the update process for the animated models takes too long. This is what I do: each character has ~30 bones, and on every update tick the animation gets calculated and every bone fires an event with its new matrix. The physics system receives the event with the new matrix and updates the collision shape for that bone. The time it takes to build the animation isn't that bad (0.2 ms for 30 bones, 6 ms for 30 models). But the main problem is that the physics engine (Bullet) uses a different matrix layout for transformations, so it's necessary to convert. Code for the matrix conversion (~0.005 ms):

        btTransform CLEAR_PHYSICS_API Mat_to_btTransform( Mat mat )
        {
            btMatrix3x3 bulletRotation;
            btVector3   bulletPosition;
            XMFLOAT4X4  matData = mat.GetStorage();
            // copy rotation matrix (transposed into Bullet's convention)
            for ( int row = 0; row < 3; ++row )
                for ( int column = 0; column < 3; ++column )
                    bulletRotation[row][column] = matData.m[column][row];
            // copy translation
            for ( int column = 0; column < 3; ++column )
                bulletPosition[column] = matData.m[3][column];
            return btTransform( bulletRotation, bulletPosition );
        }

    The function for updating the transform (physics):

        void CLEAR_PHYSICS_API BulletPhysics::VKinematicMove( Mat mat, ActorId aid )
        {
            if ( btRigidBody * const body = FindActorBody( aid ) )
            {
                btTransform tmp = Mat_to_btTransform( mat );
                body->setWorldTransform( tmp );
            }
        }

    The real problem is the lookup inside FindActorBody( id ):

        ActorIDToBulletActorMap::const_iterator found = m_actorBodies.find( id );
        if ( found != m_actorBodies.end() )
            return found->second;

    All physics actors are stored in m_actorBodies, and that's why the updating process takes too long. But I have no idea how I could avoid this. Friendly greetings, Mathias
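
    One possibility, sketched below under the question's own names, is to resolve the map lookup once per model rather than once per bone per tick, and to cache the btRigidBody pointer. The AcquireBody helper, the unordered_map swap, and the ActorId alias are assumptions, not part of the original code:

        #include <unordered_map>
        #include <btBulletDynamicsCommon.h>   // btRigidBody, btTransform

        using ActorId = unsigned int;          // assumption: any hashable id type works

        class BulletPhysics {
            // unordered_map gives O(1) average lookups vs. O(log n) for std::map
            std::unordered_map<ActorId, btRigidBody*> m_actorBodies;

        public:
            // Resolve once (e.g. when the model spawns) and keep the pointer with
            // the model, instead of calling FindActorBody per bone per tick.
            btRigidBody* AcquireBody( ActorId aid ) {
                auto found = m_actorBodies.find( aid );
                return found != m_actorBodies.end() ? found->second : nullptr;
            }

            // The per-tick path then takes the cached pointer and does no lookup at all.
            void VKinematicMove( const btTransform& xform, btRigidBody* body ) {
                if ( body )
                    body->setWorldTransform( xform );
            }
        };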

    Read the article

  • What the Hekaton?

    - by Tony Davis
    Hekaton, the power behind SQL Server 2014's In-Memory OLTP technology, is intended to make data operations run orders of magnitude faster on SQL Server. This works its magic partly by serving database workloads entirely from main memory, using memory-optimized table structures. It replaces the relational engine's standard locking model with an optimistic concurrency model based on time-stamped row versions. Deeper down, the Hekaton engine uses new, 'latch-free' data structures. So far, so good, but performance improvements on this scale require a compromise, and the compromise is that these aren't tables as we understand them. For the database developer, these differences are painful because they involve sacrificing some very important bits of the relational model. Most importantly, Hekaton tables don't currently support FOREIGN KEY constraints or CHECK constraints, and you can't put the checks in triggers because there aren't any DML triggers either. Constraints allow a relational designer to enforce relational integrity and data integrity. Without them, of course, 'bad data' can get into our Hekaton tables. There is no easy way of preventing it. For several classes of database and data, this is a show-stopper. One may regard all these restrictions regretfully, seeing limited opportunity to try out Hekaton with current databases, but perhaps there is also a sudden glow of recognition. Isn't this how we all originally imagined table variables were going to be, back in SQL 2005? And they have much the same restrictions. Maybe, instead of pretending that a currently-designed database can be 'Hekatonized' with a few mouse clicks, we should redesign databases for SQL 2014 to replace table variables with Hekaton tables, exploiting this technology for fast intermediate processing, and for the most part forget, for now, the idea of trying to convert our base relational tables into Hekaton tables. Few database developers would be averse to having their working tables running an order of magnitude faster, as long as it didn't compromise the integrity of the data in the base tables.
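
    As a concrete illustration of that closing suggestion, here is a minimal sketch of a non-durable memory-optimized table standing in where a table variable might have been used. The syntax follows the SQL Server 2014 In-Memory OLTP documentation; the table, columns, and bucket count are invented, and the database needs a MEMORY_OPTIMIZED_DATA filegroup first:

        -- A memory-optimized "working" table for fast intermediate processing.
        CREATE TABLE dbo.OrderStaging
        (
            OrderId     INT    NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            CustomerId  INT    NOT NULL,
            OrderTotal  MONEY  NOT NULL
            -- Note: no FOREIGN KEY or CHECK constraints; Hekaton tables don't support them (as of 2014).
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY); -- non-durable, like a table variable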

    Read the article

  • A Cautionary Tale About Multi-Source JNDI Configuration

    - by scott.s.nelson(at)oracle.com
    Here's a bit of fun with WebLogic JDBC configurations. I ran into this issue after reading that p13nDataSource and cgDataSource-NonXA should not be configured as multi-source. There were some issues changing them to use the basic JDBC connection string, and when rolling back to the bad configuration the server went "Boom". Since one purpose behind this blog is to share lessons learned, I just had to post this. If you write your descriptors manually (as opposed to generating them using the WLS console) and put a comma-separated list of JNDI addresses like this:

        <jdbc-data-source-params>
          <jndi-name>weblogic.jdbc.jts.commercePool,contentDataSource,
            contentVersioningDataSource,portalFrameworkPool</jndi-name>
          <algorithm-type>Load-Balancing</algorithm-type>
          <data-source-list>portalDataSource-rac0,portalDataSource-rac1</data-source-list>
          <failover-request-if-busy>false</failover-request-if-busy>
        </jdbc-data-source-params>

    then, so long as the first address resolves, it will still work. Sort of. If you call this connection to do an update, only one node of the RAC instance is updated. Other wonderful side effects include the server refusing to start sometimes. The proper way to list the JNDI sources is one per node, like this:

        <jdbc-data-source-params>
          <jndi-name>weblogic.jdbc.jts.commercePool</jndi-name>
          <jndi-name>contentDataSource</jndi-name>
          <jndi-name>contentVersioningDataSource</jndi-name>
          <jndi-name>portalFrameworkPool</jndi-name>
          <algorithm-type>Load-Balancing</algorithm-type>
          <data-source-list>portalDataSource-rac0,
            portalDataSource-rac1,
            portalDataSource-rac2
          </data-source-list>
          <failover-request-if-busy>false</failover-request-if-busy>
        </jdbc-data-source-params>

    (Props to Sandeep Seshan for locating the root cause)
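
    To sanity-check the result, a minimal sketch of a standalone lookup of each JNDI name is below. The t3 URL is an assumption for your admin server; the initial context factory is WebLogic's standard one, and the WebLogic client jar (e.g. wlthint3client.jar) must be on the classpath:

        import java.util.Hashtable;
        import javax.naming.Context;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class JndiSmokeTest {
            public static void main(String[] args) throws Exception {
                Hashtable<String, String> env = new Hashtable<>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                env.put(Context.PROVIDER_URL, "t3://localhost:7001"); // assumption: admin server URL
                Context ctx = new InitialContext(env);
                for (String name : new String[] {
                        "weblogic.jdbc.jts.commercePool", "contentDataSource",
                        "contentVersioningDataSource", "portalFrameworkPool" }) {
                    DataSource ds = (DataSource) ctx.lookup(name); // throws if the name doesn't resolve
                    System.out.println(name + " -> " + ds);
                }
            }
        }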

    Read the article

  • Do you know the minimum builds to create on any branch?

    - by Martin Hinshelwood
    You should always have three builds on your team project. These should be set up and tested using an empty solution before you write any code at all.

    Figure: Three builds named in the format [TeamProject].[AreaPath]_[Branch].[Gate|CI|Nightly] for every branch.

    These builds should use the same XAML build workflow; however, you may set them up to run different sets of tests depending on the time it takes to run a full build.

    - Gate – Only needs to run the smallest set of tests, but should run most if not all of the unit tests. This is run before developers are allowed to check in.
    - CI – This should run all unit tests and all of the automated UI tests. It is run after a developer check-in.
    - Nightly – The nightly build should run all of the unit tests, all of the automated UI tests, and all of the load and performance tests. The nightly build is time consuming and runs but once a night. Packaging of your product for testing the next day may be done at this stage as well.

    Figure: You can control what tests are run and what data is collected while they are running.

    Note: We do not run all the tests every time because of the time-consuming nature of running some tests, but ALL tests should be run overnight.

    Note: If you had a really large project with thousands of tests, including long-running load tests, you might need to add a Weekly build to the mix.

    Figure: Bad example – you can't tell what these builds do if they are in a larger list.

    Figure: Good example – you know exactly what project, branch and type of build these are for.

    Technorati Tags: SSW, SSW Rules, VS2010, VS ALM, Team Build 2010, Team Build
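
    Concretely, with the naming format above, a team project "SSW" with an area path "NorthwindTraders" and a "Main" branch would get three builds named like this (hypothetical names, purely to illustrate the convention):

        SSW.NorthwindTraders_Main.Gate
        SSW.NorthwindTraders_Main.CI
        SSW.NorthwindTraders_Main.Nightly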

    Read the article

  • Samba issue with sharing directories on NTFS/FAT32

    - by Microkernel
    I have some strange problems with the Samba server. I am using Samba version 3.5.4 on Ubuntu 10.10. I have two Windows XP machines, one on VirtualBox on Ubuntu and the other an office laptop. The Windows machine on VirtualBox has no issues accessing the shared folders, but the laptop is not able to access all the shared content. The issue on the laptop is the following: shared folders on ext3 drives can be accessed without issues, but the contents shared on NTFS and FAT32 drives (mounted ones) are not accessible. When I try to open the shared folder, it asks for a user name and password, but doesn't accept them when I provide them (even if I provide admin login details). I changed the workgroup value to the domain name on the office laptop, but the problem persists. Here is the smb.conf I am using:

        [global]
        workgroup = XXX.XXX.ORG
        server string = %h server (Samba, Ubuntu)
        map to guest = Bad User
        obey pam restrictions = Yes
        pam password change = Yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%m
        max log size = 1000
        dns proxy = No
        usershare allow guests = Yes
        panic action = /usr/share/samba/panic-action %d
        guest ok = Yes

        [homes]
        comment = Home Directories

        [printers]
        comment = All Printers
        path = /var/spool/samba
        read only = No
        create mask = 0700
        printable = Yes
        browseable = No

        [print$]
        comment = Samba server's CD-ROM
        path = /cdrom
        force user = nobody
        force group = nobody
        locking = No

    The workgroup was defined as "HOMENET" before; I changed it to the domain name on the office laptop thinking that was the problem, but to no avail.
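
    Since permissions on vfat and ntfs mounts are fixed at mount time (those filesystems carry no Unix ownership for Samba to map), one thing worth checking is how the drives are mounted. A sketch of /etc/fstab lines that make the mounted contents readable for the Samba user follows; the device names, mount points, and uid/gid are assumptions for your system:

        /dev/sdb1  /mnt/ntfsdrive  ntfs-3g  defaults,uid=1000,gid=1000,umask=022  0  0
        /dev/sdc1  /mnt/fatdrive   vfat     defaults,uid=1000,gid=1000,umask=022  0  0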

    Read the article

  • How does one escape the GPL?

    - by tehtros
    DISCLAIMER: I don't pretend to know anything about licensing. In fact, everything I say below may be completely false! Backstory: Recently, I've been looking for a decent game engine, and I think I've found one that I really like, Cafu Engine. However, they have a dual licensing plan, where everything you make with the engine is forced under the GPL unless you pay for a commercial license. I'm not saying that it's a bad engine; they even say that they are very relaxed about the licensing fees. However, the fact that it even involves the GPL scares me. So my question is basically: how does one escape the GPL? Here's an example: the id Tech engine, also known as the Quake engine or the Doom engine, was the base for the popular Source engine. However, the id Tech engine has been released under the GPL, and the Source engine is proprietary. Did Valve get a different license? Or did they do something to escape the GPL? Is there a way to escape the GPL? Or, if you use GPL'd source code as a base for another project, are you forced to use the GPL and make your source code available to the world? Could some random person take the id Tech engine, modify it past the point of recognition, then use it as a proprietary engine for commercial products? Or are they required to make it open source? One last thing: I generally have no problem whatsoever with open source. However, I feel that open source has its place, but that is not in the business world.

    Read the article

  • What to include in metadata?

    - by shyam
    I'm wondering if there are any general guidelines or best practices regarding when to split data out into a metadata format, as opposed to embedding it directly within the data. (Specific example below.) My understanding of metadata is that it describes data (without the need to actually look at the data), allowing data to be quickly searched/filtered for easy access. Let's take for example a simple 3D model format. The actual data file itself is a binary file containing vertices and colors. Things like creation date, modified date and author name would be things that describe the binary data, so I would say these belong as metadata (outside of the binary file). But what if the application had no need to search or filter by these fields? Would it be acceptable to embed these fields directly into the binary data itself? Could they be duplicated in both the binary data and the metadata, or would this be considered bad practice? What about more ambiguous fields such as the model name, which could be considered part of the data itself, but also data describing the binary data? How do you decide which data to embed in the actual binary file, as opposed to separating it into a more flexible metadata format? Thanks!
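
    For illustration, one common layout is a binary payload plus a sidecar metadata file, which keeps the describing fields searchable without parsing the binary. File names and fields here are invented; "name" is the ambiguous field discussed above:

        model_castle.bin          (vertices and colors: the data)
        model_castle.meta.json    (the sidecar that describes it)

        {
          "name": "castle",
          "author": "jdoe",
          "created": "2012-06-01",
          "modified": "2012-06-20"
        }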

    Read the article

  • Why We Need UX Designers

    - by Tim Murphy
    OK, so maybe this is really why I need UX designers. While I have always had an interest in photography and can appreciate a well-designed user interface, putting one together is an entirely different endeavor. Being color blind doesn't help, but coming up with ideas is probably the biggest portion of the issue. I can spot things that just don't look like they work right, but what will? UX design is an area in which most companies do not invest much, if any, resources. As they say, you only get one chance to make a first impression, and a poorly designed site or application is a bad first impression. Given that, you would think that companies would invest more in appearance and usability. One of two things needs to be done to rectify this issue: either we need to start educating our developers on user experience and design, or we need to start finding ways to subsidize putting full-time designers on our project teams. Maybe it should be a time-share type of situation, but something needs to be done. As architects we need to impress on our project stakeholders the importance of user experience and why it should be part of the budget. If they hear it often enough, eventually they may present it to you as their own idea.

    del.icio.us Tags: User Experience, UX, Application Design

    Read the article

  • Should I set the cells in a TiledMap to null when my player hits them?

    - by Vishal Kumar
    I am making a tile-based game using libgdx. I took the idea from the SuperKoalio platformer demo by Mario Zechner. When I wanted to implement collectables in my game, I simply drew the coins using the Tiled map editor. When my player hits one, I set that cell to null. Somebody on this site suggested that I not do so… never use null. I agreed. What would be another way? If I use layer.setCell(x, y, …) to set the cell to any other cell, even a transparent one, my player seems to be stopped by an invisible object/hurdle. This is my code:

        for (Rectangle tile : tiles) {
            if (koalaRect.overlaps(tile)) {
                TiledMapTileLayer layer = (TiledMapTileLayer) map.getLayers().get(1);
                try {
                    type = layer.getCell((int) tile.x, (int) tile.y).getTile()
                               .getProperties().get("tileType").toString();
                } catch (Exception e) {
                    System.out.print("Exception in Tiles Property" + e);
                    type = "nonbreakable";
                }
                // Let us destroy this cell
                if ("award".equals(type)) {
                    layer.setCell((int) tile.x, (int) tile.y, null);
                    listener.coin();
                    score += 100;
                    test = "" + layer.getCell(0, 0).getTile().getProperties().get("tileType");
                }
                // DOING THIS GIVES A BAD EFFECT
                if ("killer".equals(type)) {
                    // player.health--;
                    // layer.setCell((int) tile.x, (int) tile.y, layer.getCell(20, 0));
                }
                // we actually reset the player y-position here
                // so it is just below/above the tile we collided with
                // this removes bouncing :)
                if (player.velocity.y > 0) {
                    player.position.y = (tile.y - Player.height);
                }
            }
        }

    Is this the right approach? Or should I create a separate Sprite class called Coin?
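
    One alternative sketch keeps the Cell but clears only its tile, so nothing invisible is left in the collision layer. The property name follows the question's code; the layer, listener, and score fields are assumed from it, and the collision-pass note is an assumption about how the tiles list is built:

        // Assumes the layer/listener/score fields from the code above.
        TiledMapTileLayer.Cell cell = layer.getCell((int) tile.x, (int) tile.y);
        if (cell != null && cell.getTile() != null
                && "award".equals(cell.getTile().getProperties().get("tileType"))) {
            cell.setTile(null);   // the Cell object survives; only the coin's visual goes away
            listener.coin();
            score += 100;
        }
        // Then, when gathering candidate collision tiles, skip cells whose tile is
        // null (or whose tileType is "award"), so nothing invisible blocks the player.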

    Read the article

  • Microsoft and innovation: IIF() method

    This Saturday I was watching a couple of eLearning videos from TrainSignal (thanks to the subscription I have with Pluralsight) on Querying Microsoft SQL Server 2012 (exam 70-461). 'Innovation' by Microsoft: I kept myself busy learning 'new' things about Microsoft SQL Server 2012 and some best practices. It was incredibly 'innovative' to see that there is an additional logic function called IIF() available now:

        Returns one of two values depending on the value of a logical expression.
        IIF(lExpression, eExpression1, eExpression2)

    Oops, my bad… That's actually taken from the syntax page of Visual FoxPro 9.0 SP2. And ta-da, at least seven (7+) years later, there's the recent IIF() Transact-SQL version of that function:

        Returns one of two values, depending on whether the Boolean expression evaluates to true or false in SQL Server 2012.
        IIF ( boolean_expression, true_value, false_value )

    Now, that's what I call innovation! But we all know what happened to Visual FoxPro… It has been reincarnated in the form of Visual Studio LightSwitch (and SQL Server). Enough ranting… Happy coding!
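
    For reference, a minimal T-SQL sketch with invented table and column names; IIF() is documented as shorthand for a two-branch CASE expression, which is what it compiles down to:

        SELECT OrderId,
               IIF(OrderTotal >= 1000, 'large', 'small') AS OrderSize,
               CASE WHEN OrderTotal >= 1000 THEN 'large' ELSE 'small' END AS OrderSizeCase
        FROM dbo.Orders;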

    Read the article

  • Source-control your BI Publisher reports

    - by Dmitry Nefedkin
    Version control systems (VCS) like Subversion, Git and others have been widely adopted and have become must-have tools in any software development project. Source artifacts are checked out, modified, checked in, and all the history of changes is tracked by the VCS. But what if the development tool stores the source/configuration artifacts not on your laptop's hard drive, but in some shared repository instead? Well, we definitely need a way to export/import our artifacts from/to this repository. The Oracle BI Publisher report development approach is based on such a shared repository model (the catalog), and starting from BI Publisher 11.1.1.5 Oracle ships the Catalog Utility, which can be used to export/import the reports from the command line. To start using the BI Publisher Catalog Utility you should:

    1. Go to the file system of the server where the BI Publisher binaries have been installed and locate the following file: <MW_HOME>/Oracle_BI1/clients/bipublisher/BIPCatalogUtil.zip
    2. Copy the file to your local filesystem and unzip it. I will refer to this unzipped directory as <BIP_CLIENT_DIR> below.
    3. If you do not want to pass the BI Publisher server URL, username and password during each invocation, modify the corresponding values inside <BIP_CLIENT_DIR>/config/xmlp-client-config.xml.
    4. Open a terminal window and go to <BIP_CLIENT_DIR>/bin.
    5. Make sure that the following environment variables are set: JAVA_HOME, ORACLE_HOME.

    Now it's time to run the utility:

    - If you are on Linux, just run BIPCatalogUtil.sh and pass the parameters according to the utility documentation.
    - If you are on MS Windows, the bad news is that the command script for MS Windows is missing, and support.oracle.com note 1333726.1 says that a temporary solution is to "create a .cmd file by setting up a classpath and copying the same commands from the .sh script". The good news is that I've created this script already; please download it from GitHub.

    Hope you will find this utility useful during your day-to-day BI Publisher development.
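
    For readers who want to roll their own rather than download, a skeletal .cmd along the lines of that support note might look like the sketch below. The install directory is an assumption, and the main class is deliberately left as a placeholder to be copied from the real BIPCatalogUtil.sh:

        @echo off
        REM Sketch only: mirror the classpath and java command found inside BIPCatalogUtil.sh.
        set BIP_CLIENT_DIR=C:\bip\BIPCatalogUtil
        set CLASSPATH=%BIP_CLIENT_DIR%\lib\*;%BIP_CLIENT_DIR%\config
        "%JAVA_HOME%\bin\java" -classpath "%CLASSPATH%" <main-class-from-the-sh-script> %*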

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-20

    - by Bob Rhubart
    New book: Oracle WebLogic Server 12c: First Look
    Congratulations to Michel Schildmeijer on the publication of his new book.

    Call for Nominations: Oracle Fusion Middleware Innovation Awards 2012 - Win a free pass to #OOW12
    These awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Either a customer, their partner, or an Oracle representative can submit the nomination form on behalf of the customer. Submission deadline: July 17. Winners receive a free pass to Oracle OpenWorld 2012 in San Francisco.

    ODTUG Kscope12 - June 24-28 - San Antonio, TX
    Kscope12, sponsored by ODTUG, is your home for Application Express, BI and Oracle EPM, Database Development, Fusion Middleware, and MySQL training by the best of the best!

    Eclipse and Oracle Fusion Development - Free Virtual Event, July 10th
    Get more out of Eclipse with these useful resources.

    How to Create Multiple Internal Repositories for Oracle Solaris 11 | Albert White
    Albert White shows you how to create and manage internal repositories for release, development, and support versions of Solaris 11.

    Social Technology and the Potential for Organic Business Networks | Michael Fauscette
    "An organic business network driven company is the antithesis of a hierarchical, rigid, reactive, process-constrained, and siloed organization."

    Cloud Bursting between AWS and Rackspace | High Scalability
    Nati Shalom explains "cloud bursting," an interesting hybrid cloud model.

    Born-again cloud advocates finally see the light | David Linthicum
    "I can't help but wish that we keep an open mind about the next technology evolution when it begins and get religion earlier," says Linthicum.

    How to know that a method was run, when you didn't write that method | RedStack
    Middleware A-Team blogger Mark Nelson shares a useful tip for those working with ADF.

    Thought for the Day
    "There does not now, nor will there ever exist, a programming language in which it is the least bit hard to write bad programs." — L. Flon
    Source: SoftwareQuotes.com

    Read the article

  • Would adding award points or game features to workplace software be viewed poorly amongst the programming community?

    - by Eric P
    So one of my responsibilities at work is to build an internal tool that helps the workers enter in all their information. It's an enterprise application that is similar to a Windows Forms database tool, so it's not much different from developing a Word + Excel combo application, but the average person in this workgroup is a 20-40 year old woman or a chatty male type. Plus, I know all of these people are heavily involved with Facebook on a daily basis. How bad would it be if I styled my new interface to be similar to what Facebook does? People could get award points and such when they fill out different types of forms, and basically compete against each other as if it were a game. When someone completed one, it would be posted on their wall, and everyone could comment on and like it, just like in Facebook. It would be like they were doing peer review for fun. The rewards would be outstanding, I would imagine. These people are so into Facebook and Facebook games that productivity would rise due to them trying to compete and earn points and achievements. Would this be taking advantage of the people by 'tricking them into working harder by giving them a game', or would it be viewed as something that would improve happiness at work?

    Read the article

  • Is '@' Error Suppression a Valid Technique for Testing for an Optional Array Key?

    - by MikeSchinkel
    Rarst and I were debating offline about the use of the '@' error suppression operator in PHP, specifically its use to test for the existence of "optional" array keys, i.e. array keys being used as a switch, where their absence from the array is functionally equivalent to the array having the key with a value equal to false. Here is pseudo-code for this scenario:

        function do_something( $args = array() ) {
            if ( @$args['switch'] ) {
                // Do something with this switch
            }
            // continue on...
        }

    vs. this approach:

        function do_something( $args = array() ) {
            if ( ! empty( $args['switch'] ) && $args['switch'] ) {
                // Do something with this switch
            }
            // continue on...
        }

    Of course, in most use cases, suppressing errors would not be A Good Thing(tm). However, in this use case, where an array is passed with an optional element, it seems to me that it is actually a very good technique, but I could be wrong and would like to hear others' opinions on the subject before I make up my mind. I do know that there are alleged performance hits for using the former approach, but I'd like to know how they compare with the alternative, and whether the performance hits really matter in real-world scenarios. P.S. I decided to post this because, after debating this offline with Rarst, he asked a more general question here on Programmers but didn't actually give a detailed example of the specific use-case we were debating. And since I'm pretty sure he'll want to use the out-of-context answers on that other question as justification for why the above is "bad", I decided I needed to get opinions on this specific use-case.
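
    For comparison, a third sketch that avoids both the notice and the suppression operator; this only illustrates the isset() idiom and makes no claim about its relative performance:

        <?php
        function do_something( $args = array() ) {
            // isset() raises no notice for a missing key and reads explicitly.
            // (On PHP 7+, $args['switch'] ?? false does the same in one expression.)
            $switch = isset( $args['switch'] ) ? $args['switch'] : false;
            if ( $switch ) {
                // Do something with this switch
            }
            // continue on...
        }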

    Read the article

  • Where to store global enterprise properties?

    - by shylynx
    I'm faced with a crowd of Java applications which need various global, enterprise-wide properties for operation, for example: the hostname of the central RDBMS, the hostname and location of the central self-service portal, the host location of the central LDAP, the host location of the central mail server, etc. So far we have built each application with a properties file in which all these properties are defined. But that's a very bad solution, because if the hostname of the mail server changes, we need to change the properties file for each application and deploy all the applications again. Our idea is to centralize these properties, so that each application can ask for each property at runtime. For example:

    - Idea: Put the properties file on an easily accessible file share, so that if we need to change a property, each application picks up the new properties.
    - Idea: Put the properties in a database. Main disadvantage: we need a dependency on database client libraries for each application.
    - Idea: Put all applications into one big application server that provides system properties for each application. Main disadvantage: needs deployment of each application to one application server. But that isn't a realistic scenario.
    - Idea: A web service that provides global enterprise-wide properties. Main disadvantage: not very secure, because some properties are passwords or user credentials.

    What other alternatives are recommended? What is the state of the art?
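
    As a sketch of what the centralized lookup could look like from the application side (the file-share or web-service ideas above), each app loads java.util.Properties from one well-known URL at startup. The URL and property name are assumptions, and the security caveat above still applies to anything secret:

        import java.io.InputStream;
        import java.net.URL;
        import java.util.Properties;

        public class EnterpriseProperties {
            public static Properties load() throws Exception {
                Properties props = new Properties();
                // One well-known location; changing a value here updates every app on restart.
                try (InputStream in = new URL("https://config.example.org/global.properties").openStream()) {
                    props.load(in);
                }
                return props;
            }

            public static void main(String[] args) throws Exception {
                System.out.println(load().getProperty("mail.host"));
            }
        }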

    Read the article

  • How to achieve a loosely coupled REST API but with a defined and well understood contract?

    - by BestPractices
    I am new to REST and am struggling to understand how one would properly design a REST system to both allow for loose coupling and at the same time allow a consumer of a REST API to understand the API. If, in my client code, I issue a GET request for a resource and get back XML, how do I know what to do with that XML? E.g., if it contains <fname>John</fname><lname>Smith</lname>, how do I know that these refer to the concepts of "first name" and "last name"? Is it up to the person writing the REST API to define someplace in documentation what each of the XML fields means? What if the producer of the API wants to change the implementation to be <firstname> instead of <fname>? How do they do this and notify their consumers that this change occurred? Or do the consumers just encounter the error and then look at the payload and figure out on their own that it changed? I've read in REST in Practice that using a WADL tool to create a client implementation based on the WADL (and hide the fact that you're doing a distributed call) is an "anti-pattern". But I was planning to do this; at least then I would have a statically typed API call that, if it changed, I would know about at compile time and not at run time. Why is it a bad thing to generate client code based on a WADL? And how do I know what to do with the links returned in the response of a POST to a REST API? What defines this contract and gives true meaning to what each link will do? Please help! I don't understand how to go from statically typed, or even SOAP/RPC, to REST!
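
    One partial answer, sketched below with invented names, is that the contract lives in a documented, versioned media type plus link relations: the namespace tells the client which vocabulary <fname> belongs to (a rename would mean a new version), and the rel values, not the URLs, give the links their meaning:

        <person xmlns="urn:example:person:v1">
          <fname>John</fname>
          <lname>Smith</lname>
          <link rel="self"   href="https://api.example.org/people/42"/>
          <link rel="orders" href="https://api.example.org/people/42/orders"/>
        </person>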

    Read the article

  • Designing Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun up a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've needed pseudo-ETL applications that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place them in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.
            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);
            foreach (var file in files)
            {
                // Extract metadata from the file.
                // Validate some attributes of the file; add any validation errors to an
                // in-memory structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    // Upload using some wrapper for an ORM or someInterface.Upload(meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file.
                }
            }
            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!
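
    For reference, a minimal sketch of the "Worker with constructor-injected interfaces" shape mentioned above; the interface names are invented stand-ins, not DbAmp or SFDC APIs:

        using System.Collections.Generic;

        public interface IDocumentQueue { IEnumerable<string> GetFiles(string pattern); }
        public interface IContentUploader { void Upload(string file, IDictionary<string, string> meta); }

        public class EtlWorker
        {
            private readonly IDocumentQueue _queue;
            private readonly IContentUploader _uploader;

            public EtlWorker(IDocumentQueue queue, IContentUploader uploader)
            {
                _queue = queue;       // swap in fakes for unit tests
                _uploader = uploader;
            }

            public void Run(string pattern)
            {
                foreach (var file in _queue.GetFiles(pattern))
                {
                    // extract + validate elided; see the outline above
                    _uploader.Upload(file, new Dictionary<string, string>());
                }
            }
        }

    The payoff is testability: Main becomes a thin composition root that news up concrete implementations and calls Run().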

    Read the article

  • Can't access Samba shares on Ubuntu Server from other computers

    - by larand
    I installed Ubuntu Server 12.04 and configured /etc/samba/smb.conf as:

        #======================= Global Settings =======================
        [global]
        workgroup = HEMMA
        server string = %h server (Samba, Ubuntu)
        security = user
        wins support = yes
        dns proxy = no
        log file = /var/log/samba/log.%m
        max log size = 1000
        syslog = 0
        panic action = /usr/share/samba/panic-action %d
        encrypt passwords = no
        passdb backend = tdbsam
        obey pam restrictions = yes
        unix password sync = yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        pam password change = yes
        map to guest = bad user

        ############ Misc ############
        usershare allow guests = yes

        #======================= Share Definitions =======================
        [printers]
        comment = All Printers
        browseable = no
        path = /var/spool/samba
        printable = yes
        guest ok = no
        read only = yes
        create mask = 0700

        # Windows clients look for this share name as a source of downloadable
        # printer drivers
        [print$]
        comment = Printer Drivers
        path = /var/lib/samba/printers
        browseable = yes
        read only = yes
        guest ok = no

        [Bilder original]
        comment = Original bilder
        path = /mnt/bilder/org
        browseable = yes
        read only = no
        guest ok = no
        create mask = 0755

        [Bilder publika]
        comment = Bilder för allmän visning
        path = /mnt/bilder/public
        browseable = yes
        read only = yes
        guest ok = yes

        [Musik]
        comment = Musik
        path = /mnt/music/public
        browseable = yes
        read only = yes
        guest ok = yes

    I have a network set up around a 4G router (HUAWEI B593) where some computers are connected by WiFi and others by LAN. The server is connected by LAN. On one computer running Windows XP I can see the shares but am not allowed to access them. On another computer on the WiFi net, running Windows 7, I cannot see the server at all, but I can ping it, and I can see the SMB protocol running when sniffing with Wireshark. I don't primarily want to use passwords; computers on the LAN and WiFi should be able to connect without any login procedure. I'm sure my config is not sufficient, but I find it hard to understand what I should do. There are lots of descriptions on the net, but most are old and none have been of any help. I'm also confused by the fact that I cannot see the server on my Win7 machine even though it communicates with the Samba server. I would be very happy if anyone could shed some light on this mess.
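
    A couple of quick checks, sketched below, can help separate browsing problems from permission problems; the hostname is a placeholder, and the share name comes from the config above:

        # Validate smb.conf and show the effective settings
        testparm -s

        # From a client (or the server itself): list shares anonymously,
        # which exercises the "map to guest = bad user" path
        smbclient -L //server -N

        # Try a guest-ok share without a password
        smbclient '//server/Bilder publika' -N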

    Read the article

  • GitHub: Are there external tools for managing issues list vs. project backlog

    - by DXM
    Recently I posted one of my projects1 on GitHub, and as I was exploring the capabilities of the site, I noticed they have a rather decent issue-tracking section. I want to use that section so that a) other people can report bugs if they'd like, and b) other people can see which bugs I'm aware of. However, as others have noted, the issues list cannot be prioritized in order to create a project backlog. For now my backlog has been a text file, but I'd like to have it integrated so the same information isn't maintained in different places. Having a fully ordered list, which is something we also practice at work, has been very useful, as I can open one file, start with line 1 and fire off 2 or 3 items in one sitting without having to go back to a full issues/stories bucket. GitHub doesn't offer this. What GitHub does offer is a very nice and clean API, so issues can easily be exported into anything else. I've searched to see if there are other websites (like Trello) that integrate with GitHub issues, but did not find anything. Does anyone know of such a product, service or offline tool? Those of you who use GitHub, what is your experience in managing a backlog? I kind of hate the idea of manually managing two disconnected lists, as some people seem to be doing with wiki project pages.

    1 - Are shameless plugs allowed on this site? I searched but didn't find a definite answer. If it's bad practice, STOP and don't read further. As a developer I got sick and tired of navigating to the same set of folders 30 times a day, so I wrote a little auto-collapsible utility that gets stuck to the desktop and allows easy access to the folders you constantly use.
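
    As a starting point for that export route, here is a minimal sketch against the GitHub REST API (v3); OWNER and REPO are placeholders, and unauthenticated requests are rate-limited:

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class IssueExport {
            public static void main(String[] args) throws Exception {
                // Substitute your owner/repo; paginate past 100 issues via the page parameter.
                String url = "https://api.github.com/repos/OWNER/REPO/issues?state=open&per_page=100";
                HttpResponse<String> res = HttpClient.newHttpClient()
                        .send(HttpRequest.newBuilder(URI.create(url)).GET().build(),
                              HttpResponse.BodyHandlers.ofString());
                System.out.println(res.body()); // JSON array of issues: number, title, labels, ...
            }
        }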

    Read the article

  • Unhelpful Help

    - by Geoff N. Hiten
    Up until SQL 2012, I recommended installing Books Online (BOL) anywhere you installed SQL Server. It made looking up reference information simpler, especially when you were on a server that didn't have direct Internet access. That all changed today. I started the new Help Viewer with a local copy of BOL. I actually found what I was looking for and closed the app. Or so I thought. Then I noticed something. A little parasite had attached itself to my system. Yep, the "Help" system left an "agent" behind. Now, I shouldn't have to tell you that running application helper agents on server platforms is a bad idea. And it gets worse. There is no way to configure the app so that it does NOT start the parasite agent each time you restart Help. So the solution becomes: do not install Help on production server platforms. Which is pretty unhelpful.

    Read the article

  • You Don't Want to Meet Orgad Kimchi in a Dark Alley

    - by rickramsey
    Do you remember what those bad guys in the old Charles Bronson films looked like? They looked like Orgad Kimchi, that's what they looked like. When I met him at Oracle OpenWorld 2012, I realized I didn't want to meet him in the wrong alleyway of Budapest after dark. Neither do old versions of Oracle Solaris, which Orgad bends to his will with as much ease as he probably bends stray tourists to his will in Budapest, Kandahar, or Dagestan. How Orgad Made Oracle Database Migrate from Oracle Solaris 8 to Oracle Solaris 11: in this article, which we liked so much we reprinted it from his blog (please don't tell him!), Orgad explains how he head-butted an Oracle Database into submission. The database thought it was safe running on Oracle Solaris 8, but Orgad dragged its whimpering carcass into Oracle Solaris 11. How'd he do that? Well, if you had met Orgad in person, you wouldn't ask that question. Because you'd know he could have simply stared at it, and the database would have migrated on its own. But Orgad didn't do that. Instead, he stuffed an Oracle Solaris 8 Physical-to-Virtual (P2V) Archiver Tool into his leather trench coat, the one with the special pockets sewn in by the East German Secret Police for several Uzis and their ammo, and walked into his data center in a way that reminded the survivors of this clip from Matrix Reloaded. The end result? The Oracle Database 10.2 that was running on Oracle Solaris 8 is now running inside a Solaris 10 branded zone on Oracle Solaris 11. With no complaints. Don't make Orgad angry. Read his article. - Rick

    Read the article

  • How to expose game data in the game without a singleton?

    - by zardon
    I'm quite new to cocos2d and games programming, and I am currently writing a game that is in the prototype stage. Everything is going okay, but I've realized a potentially big problem and I am not sure how to solve it. I am using a singleton to store a bunch of arrays for everything: a global list of planets, a global list of troops, a global list of products, etc. And only now I'm realizing that all of this will be in memory, and this is the wrong way to do it. I am not storing files or anything on disk just yet, with the exception of a save/load state, which is a capture of everything. My game makes use of a map which allows you to select a planet, and it will then give you a breakdown of that planet's troops and resources. Let's use this scenario: my game has 20 planets, on each of which you can have 20 troops. Straight away that's an array of 400! And that's without the NPCs, which add another 10 per planet: 20 x 10 = 200. So now we have 600, all in arrays inside a singleton. This is obviously very bad and very wrong, especially as the game scales in the amount of data. But I need to expose pretty much everything, especially on the map page, and I am not sure how else to do it. I've been told that I can use a controller for the map page which has the information I need for each planet, and other controllers for the other items I need to display globally. I've also thought about storing each planet's data in a save file, using initWithCoder; however, there could be a boatload of files on the user's device. I really don't want to use a database, mainly because I would need to translate NSObjects and non-NSObjects like CGRects and CGPoints and colors into/from SQL. I am open to other ideas on how to store and read game data without using a singleton to store everything, everywhere. Thanks for your time.
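
    One sketch of the controller-plus-model direction (class names and keys invented, ARC assumed): a single root model object owns the arrays, conforms to NSCoding so save/load stays one file, and is handed to each controller instead of being reached through a singleton:

        // GameWorld owns the global lists; controllers receive it, they don't fetch it.
        @interface GameWorld : NSObject <NSCoding>
        @property (nonatomic, strong) NSMutableArray *planets; // of Planet objects
        @end

        @implementation GameWorld
        - (void)encodeWithCoder:(NSCoder *)coder {
            [coder encodeObject:self.planets forKey:@"planets"];
        }
        - (id)initWithCoder:(NSCoder *)coder {
            if ((self = [super init]))
                _planets = [[coder decodeObjectForKey:@"planets"] mutableCopy];
            return self;
        }
        @end

        // Save/load one file instead of one file per planet:
        // [NSKeyedArchiver archiveRootObject:world toFile:path];
        // GameWorld *world = [NSKeyedUnarchiver unarchiveObjectWithFile:path];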

    Read the article

  • What I saw at TechEd North America 2014

    - by Brian Schroer
    Originally posted on: http://geekswithblogs.net/brians/archive/2014/05/19/teched-north-america-2014.aspx

    I was thrilled to be able to attend TechEd North America 2014 in Houston last week. I got to go to Orlando in 2008, and since then I've had to settle for watching the sessions online (which ain't bad – they're all available on Channel 9 for streaming or downloading. Here are links to the Developer Track sessions and to the sessions from all tracks.) The sessions I attended (with my favorites bolded) were:

    Shiny new stuff
    - The Microsoft Application Platform for Developers: Create Applications That Span Devices and Services
    - INTRODUCING: The Future of .NET on the Server
    - DEEP DIVE: The Future of .NET on the Server
    - ASP.NET: Building Web Application Using ASP.NET and Visual Studio
    - The Next Generation of .NET for Building Applications
    - The Future of Visual Basic and C#

    Stuff you can use now
    - Building Rich Apps with AngularJS on ASP.NET
    - Get the Most Out of Your Code Maps
    - SignalR: Building Real-Time Applications with ASP.NET SignalR
    - Performance Optimize Your ASP.NET Web App
    - Modern Web and Visual Studio
    - Visual Studio Power User: Tips and Tricks
    - Debugging Tips and Tricks in Visual Studio 2013

    In a world where the whole company uses TFS…
    - Using Functional, Exploratory and Acceptance Testing to Release with Confidence
    - A Practical View of Release Management for Visual Studio 2013
    - From Vanity to Value, Metrics That Matter: Improving Lean and Agile, Kanban, and Scrum

    Ain't Nobody Got Time for That
    As usual, there were some time slots with nothing of interest and others with 5 things I wanted to see at the same time. Here are the sessions I'm still planning to watch…
    - Getting Started with TypeScript
    - Building a Large Scale JavaScript Application in TypeScript
    - Modern Application Lifecycle Management
    - Why a Hacker Can Own Your Web Servers in a Day!
    - Async Best Practices for C# and Visual Basic
    - Building Multi-Device Apps with the New Visual Studio Tooling for Apache Cordova
    - Applying S.O.L.I.D. Principles in .NET/C#
    - Native Mobile Application Development for iOS, Android, and Windows in C# and Visual Studio Using Xamarin
    - Latest Innovations in Developing ASP.NET MVC Web Applications
    - Zero to Hero: Untested to Tested with Microsoft Fakes Using Visual Studio
    - Cool and Elegant ASP.NET Web Forms with HTML 5 for the Modern Web
    - The Present and Future of .NET in a World of Devices and Services

    Read the article

  • Google music search: a better way to listen.

    - by anirudha
    Someone who wants to listen to music often pays an online music store quite a lot for online listening. Otherwise they put up with bad, low-quality audio on YouTube, which is frequently illegal anyway, because the uploader doesn't have permission or the right to upload the track, and there is no guarantee about ads or quality either. So forget YouTube and the rest: Google music search is much better. Just search for the song by movie name or song title, then click and listen. The quality is much better than elsewhere, though the music itself is not hosted by Google; the results link out to other websites. One thing I feel goes wrong in Google music search: if I search for "sajda" they never show me results for "sadka", even though in everyday use the two sound the same and a song might be spelled either way. I think it would be better if they showed a "Did you mean 'sadka'?" suggestion when I search for "sajda", just as many online bookstores show keywords related to your search, and related products or books when you go to a product page. All in all, it is a better option for users who want decent-quality music without the search hassle.

    Read the article
