Search Results

Search found 19600 results on 784 pages for 'chris long'.

  • Do 10m external SAS cables work?

    - by Joachim Sauer
    According to the Wikipedia page, external SAS cables are specified for lengths of up to 10m. However, I've found it pretty hard to actually find places that sell cables of that length. This made me wonder: are there any known problems with using cables this long? Will they be more fragile? Slower? And if 10m is not recommended, would 6m be any more stable? A little background: for several reasons we'd like to put a tape library physically separate from our main server, and 10m would be enough to put it on a separate floor.

  • Server load is spiking from 2 to 250

    - by Hakzona
    Hello, I'm using WordPress 2.9 on an 8GB web server from HostGator. I've been fighting this problem for a long time but still can't find the solution. PHP was switched to run as an Apache module (PHP 5 in DSO) with Apache suEXEC, and eAccelerator was installed, but this configuration started generating a huge load on the server. The load spikes from 1 to 250 (4 CPUs) and the server stops; after a period of time it comes back, then stops again in about 10 minutes. It started happening when the HostGator support team installed eAccelerator on the server. What could cause this problem, and how can I fix it?

  • Workflow workarounds: tracking individual column changes

    - by PeterBrunone
    This post is long overdue, but since the question keeps popping up on various SharePoint discussion lists, I figured I'd document the answer here (next time I can just post a link instead of typing the whole thing out again).

    In short, you cannot trigger a SharePoint workflow when a column changes; you can only use the ItemChanged event. To get more granular, then, you need to add some extra bits.

    Let's say you have a list called "5K Races" with a column called StartTime, and you want to execute some actions when the StartTime value changes. Simply perform the following steps:

    1) Create an additional column (same datatype) called OldStartTime.
    2) When the workflow starts, compare StartTime to OldStartTime.
       a) If they are equal, then do nothing (end).
       b) If they are NOT equal, proceed with your workflow.
    3) If 2b, then set OldStartTime to the value of StartTime.

    By performing step 3, you ensure that by the end of the workflow, OldStartTime will be equal to StartTime -- this is important because the workflow will continue to run every time a particular item is changed, but by taking away the criterion that would cause the workflow to run the second time, you have avoided an endless loop situation.
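    For illustration only -- this is not SharePoint's API, just a self-contained Java sketch of the compare-and-copy guard that steps 1-3 describe (all names here are hypothetical):

      import java.util.HashMap;
      import java.util.Map;

      public class StartTimeGuard {
          // Simulates the workflow body running on an ItemChanged event.
          static void onItemChanged(Map<String, String> item) {
              String startTime = item.get("StartTime");
              String oldStartTime = item.get("OldStartTime");
              if (startTime == null ? oldStartTime == null : startTime.equals(oldStartTime)) {
                  return; // 2a: equal values -> do nothing, which ends the loop
              }
              // 2b: values differ -> run the real workflow actions here...
              System.out.println("StartTime changed to " + startTime);
              // 3: copy the new value so the next ItemChanged event hits case 2a
              item.put("OldStartTime", startTime);
          }

          public static void main(String[] args) {
              Map<String, String> item = new HashMap<>();
              item.put("StartTime", "09:00");
              onItemChanged(item); // fires the actions once
              onItemChanged(item); // the re-triggered event sees equal values and exits
          }
      }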

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1

    Clearly, immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state for so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end states of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking?

    I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every clock tick takes time, electricity, and machine resources away from other processes. So I understand that there may be more than one perspective to answer this question from. If immutability is safe, given certain bounds or assumptions, I want to know what the borders of the "safety zone" are exactly. Some examples of possible boundaries:

    - I/O
    - Exceptions/errors
    - Interfaces with programs written in other languages
    - Interfaces with other machines (physical, virtual, or theoretical)

    Special thanks to @JimmaHoffa for his comment which started this question!

    Part 2

    Multi-processor programming is often used as an optimization technique -- to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?

    Summary

    I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
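    As a concrete illustration of the lock-free style the question is asking about, here is a minimal Java sketch (my own example, not from the question): threads aggregate results by swapping immutable snapshots through a compare-and-set loop instead of taking a lock.

      import java.util.concurrent.atomic.AtomicReference;

      public class LockFreeSum {
          // Immutable snapshot of the aggregate state; every update creates a new object.
          record Totals(long count, long sum) {
              Totals add(long value) { return new Totals(count + 1, sum + value); }
          }

          private final AtomicReference<Totals> state = new AtomicReference<>(new Totals(0, 0));

          // No lock: retry the swap until no other thread raced us.
          void record(long value) {
              Totals current, next;
              do {
                  current = state.get();
                  next = current.add(value);
              } while (!state.compareAndSet(current, next));
          }

          public static void main(String[] args) throws InterruptedException {
              LockFreeSum s = new LockFreeSum();
              Thread a = new Thread(() -> { for (int i = 0; i < 100_000; i++) s.record(1); });
              Thread b = new Thread(() -> { for (int i = 0; i < 100_000; i++) s.record(1); });
              a.start(); b.start(); a.join(); b.join();
              System.out.println(s.state.get()); // Totals[count=200000, sum=200000]
          }
      }

    Note that a hardware-level compare-and-set is still doing the coordination here; whether that counts as "no locks" is exactly the kind of boundary the question probes.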

  • Technology Choice for a Client Application [on hold]

    - by AK_
    Not sure this is the right place to ask... I'm involved in the development of a new system, and now we are passing the demos stage. We need to build a proper client application. The platform we care most about is Windows, for now at least, but we would love to support other platforms, as long as it's free :-). Or at least very cheap.

    We anticipate two kinds of users:

    - Occasional users, coming mostly from the web.
    - Professional users, who would probably require more features and better performance, and would probably prefer a native client.

    Our server exposes two APIs:

    - A SOAP API (WCF behind the scenes) that supports 100% of the functionality.
    - A small and very fast UDP + binary API that duplicates some of the functionality and exists for performance in certain real-time scenarios.

    Our team is mostly proficient in .NET, C#, and C++ development, and rather familiar with web development (HTML, JavaScript). We are probably intending to develop two clients (one per user profile): a web app and a native app. Architecturally, we would like to have as many common components as possible, with several layers shared by both clients: communication, client model, client logic. We would also like to be able to add features to both clients where only the actual UI is a dual cost and the rest is shared.

    We are looking at several technologies: WPF + Silverlight, pure HTML, Flash/Flex (AIR?), Java (JavaFX?), and we are considering poking at WinRT (or whatever the proper name is). The question is: which technology would you recommend, and why? And what advantages or disadvantages would it have regarding our requirements?

  • ASP.NET, PostgreSQL, Mono, Ubuntu, Apache: Good idea?

    - by wreck_of_u
    I am a long-time Microsoft .NET developer. ASP.NET/MSSQL/IIS has been my bread & butter over the past 6 years. Now I'm getting fond of the "lightweightness" of Ubuntu 10.xx Server. I'm also loving SSH-ing into it from my Windows 7 PC and installing apps using the awesome apt-get command. I've also been using HeidiSQL with MySQL and loving it; it feels like Management Studio. However, I've read that PostgreSQL "may" be better than MySQL, and I did experience some MySQL overloads on my Moodle box (but this may just be poor tweaking on my part). My question is, would it be a good idea to run this configuration?

    - ASP.NET 4.0
    - PostgreSQL (the latest one I can apt-get!)
    - Ubuntu 10.10 with Mono running on Apache

    Also, I assume I would be using Npgsql for Mono as my connector from ASP.NET to PostgreSQL?
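    Npgsql is indeed the usual ADO.NET provider for PostgreSQL on Mono. A first smoke test might look like this C# sketch (connection-string values are illustrative, and the exact string format varies between Npgsql versions):

      using System;
      using Npgsql; // reference Npgsql.dll

      class PgSmokeTest
      {
          static void Main()
          {
              // Adjust server, database and credentials to your setup.
              var cs = "Server=127.0.0.1;Port=5432;Database=mydb;User Id=web;Password=secret;";
              using (var conn = new NpgsqlConnection(cs))
              {
                  conn.Open();
                  using (var cmd = new NpgsqlCommand("SELECT version()", conn))
                  {
                      // Prints the PostgreSQL version if the whole stack is wired up.
                      Console.WriteLine(cmd.ExecuteScalar());
                  }
              }
          }
      }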

  • Dependency Analyser

    - by tsutha
    I have been watching 3 videos put together by the SSIS team on the Dependency Analyser. I am happy Microsoft have made an effort to do something about it; I have been asking for this feature since the SQL 2005 TAP. Still, there is a long way to go before they catch up with the competition. It looks like it currently supports SSIS and SQL dependencies. The release note states it is still in development and supports only a limited set of tasks. You still would not know the impact of dropping a column on cubes, reports, etc. I hope that changes by the time the RTM comes out. I am struggling to understand why it is impossible to do it across the solution. Ideally, if you have a BI solution which holds DB, SSIS, SSAS, SSRS & PPS projects, I would like to right-click and execute a dependency analysis stating what impact dropping or renaming a specific column would have. Has anyone else looked at it, and what are your thoughts?

    Thanks, Sutha

  • Ubuntu 12.04 LTS won't install - never finishes, please help

    - by Richard Higgins
    I want to try Ubuntu after using Windows for 30 years. I've tried to install it 5 times on a Lenovo X120e notebook and twice on a Lenovo M57 desktop. No luck -- worse than what Microsoft puts you through. I burned 12.04 LTS to disc. It installs up to the "Who Are You?" screen, then stops. I accepted the recommended computer name and lower-case user name, and chose "log me in automatically." After that there is no progress bar, no rotating or pulsing button, nothing to indicate that Ubuntu has not died or fallen asleep. Is that how it is written? I've never heard of a program that would take a long time to install while the user looks at a locked, dead screen.

    I just bought the M57 desktop for my son. It came with Ubuntu 10-something. I wanted to upgrade to 12.04, but it crashed, twice, to a DOS-like screen saying the PC lacked a certain "init" file. Various help-screen commands did not help. On the X120e, I thought a partially failed Ubuntu install was causing the problem, so I removed the drive, deleted the Ubuntu partition, and replaced it. But same result: after I fill in my name and accept the computer and user names, the "continue" button does not appear to work. I can go "back" but not forward. I have waited torturous hours. It doesn't take more than two hours to install, does it? It is my own fault because of the high expectations I had for a sensible, hassle-free installation, but I am immensely disappointed. Thank you for any response.

  • Changing website URL - Am I making an SEO mistake?

    - by Denis
    I have a website with a .com domain that is a year old. The business is a shop based in Ireland, and I have purchased a .ie domain. I plan to move the website over to the new domain. SEO-wise, good or bad idea?

    Old URL: SmythsOfTerenure.com | New URL: SmythsComputerRepair.ie (I am using fake names and a fictional business in the example URLs.)

    The new domain has my main keyword in it. The old domain has my family name and business location (city district). It currently ranks high for lots of relevant keywords in Google, with low traffic and low competition. Current website traffic is about 80 sessions per week, 80% of which is organic from Google. I am changing domains in an attempt to help SEO long term by having a ccTLD (.ie rather than .com) and having my main keyword in the domain. I plan to do 301 redirects from old to new and update Google Webmaster Tools and Google Analytics, but am I making a mistake changing it at all, given that rankings may fall in the short term? The homepage has PR 0 and very few inbound links. Should I just leave it on the old domain? Or after a few months should I be back up, ranking as well as I am now?
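    For what it's worth, the domain-wide 301 can be a handful of Apache mod_rewrite lines on the old host -- a sketch only, using the fictional domains from above and assuming an .htaccess with mod_rewrite enabled:

      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^(www\.)?smythsofterenure\.com$ [NC]
      RewriteRule ^(.*)$ http://smythscomputerrepair.ie/$1 [R=301,L]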

  • links for 2011-03-16

    - by Bob Rhubart
    InfoQ: Randy Shoup on Evolvable Systems -- Randy Shoup discusses evolvable systems: how to run different versions of a system in parallel during migrations, decoupling a system with events, schemas at eBay, and much more. (tags: ping.fm)

    InfoQ: Heresy & Heretical Open Source: A Heretic's Perspective -- Douglas Crockford presents a debate existing around XML and JSON, and the negative effect of Intellectual Property laws on open source software. (tags: ping.fm)

    Oracle Technology Network Architect Day: Toronto -- Registration is now open for this day-long event, to be held at the Sheraton Centre Toronto on April 21. Registration is free, but seating is limited. (tags: oracle otn enterprisearchitecture cloudcomputing)

    Harry Foxwell: The Cloud is STILL too slow! -- "Considering the exponentially growing expectations of what the Web, that is, 'the Cloud', is supposed to provide, today's Web/Cloud services are still way too slow." - Harry Foxwell (tags: oracle otn cloud)

    Architecture Standards - BPMN vs. BPEL for Business Process Management (Enterprise Architecture at Oracle) -- Path Shepherd gives props to Mark Nelson. (tags: entarch oracle otn)

    ORCLville: Oracle Fusion Applications: If I Were An AppsTech -- Oracle ACE Director Floyd Teter says: "If I were an Oracle AppsTech with an eye on Fusion Applications, there are three tools/technologies I'd want..." (tags: oracle otn oracleace fusionapplications)

    Your brain on #entarch - OTN Architect Day - Denver - March 23 -- This free event includes sessions on Cloud Computing, Application Portfolio Rationalization, System Optimization, Event-Driven Architecture, plus food, beverages, and lots of peer networking. Seating is limited. (tags: oracle entarch otn)

  • Is there a way to see any tar progress per file?

    - by Svish
    I have a couple of big files that I would like to compress. I can do this with, for example, tar cvfj big-files.tar.bz2 folder-with-big-files. The problem is that I can't see any progress, so I don't have a clue how long it will take or anything like that. Using v I can at least see when each file is completed, but when the files are few and large this isn't all that helpful. Is there a way I can get tar to show more detailed progress? Like a percentage done, a progress bar, or estimated time left, or something -- either for each single file, or for all of them, or both.
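    One common trick (a sketch, assuming GNU tar/du and that the pv utility is installed) is to stream the archive through pv, which draws a progress bar and ETA based on the total byte count:

      # Measure the total size first so pv can show percentage and ETA,
      # then pipe tar's output through pv into bzip2.
      tar cf - folder-with-big-files | pv -s $(du -sb folder-with-big-files | awk '{print $1}') | bzip2 > big-files.tar.bz2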

  • When running a .jar application with OpenJDK, my keyboard becomes unresponsive?

    - by Mochan
    I recently downloaded a Java application in the .jar format and had it running on my computer not so long ago. Now that I'm temporarily using my desktop instead of my laptop, I want it to run there. On my laptop it was a tremendous hassle to get OpenJDK to even run the application without the window going black; on my desktop I don't have that problem. However, when I run the application, my keyboard becomes unresponsive and doesn't type at all. This is a really big problem because the application demands the use of a keyboard. It works as normal on my laptop, where it runs perfectly, but on the desktop it's completely useless. I don't know if there's a keyboard driver I'm missing, but there shouldn't be, because the keyboard runs flawlessly everywhere else. I'm using OpenJDK 6 because 7 has the same 'black screen' problem I mentioned, so I need this to work with OpenJDK 6. Thanks so much in advance, and I'll try to specify as many details as I can. M

  • Ranking drop after using reverse proxy for blog subdirectory and robots.txt for old blog subdomain

    - by user40387
    We have a 3Dcart store and a WordPress blog hosted on a separate server. Originally, we had a CNAME set up to point the blog to http://blog.example.com/. However, in our attempt to boost link-based and traffic-based authority on the main site, we've opted to do a reverse proxy to http://www.example.com/blog/. It's been about two months since we finished the reverse proxy migration. It appears that everything is technically working as intended, including some robots and sitemap changes; the new URLs are even generating some traffic, as indicated in Google Analytics. While Google has been indexing the new URL locations, they're ranking very poorly, even for non-competitive, long-tail keywords. Meanwhile, the old subdomain URLs are still ranking mostly as well as they used to (even though they aren't showing meta titles and descriptions, due to being blocked by robots.txt). Our working theory is that Google has an old index of the subdomain URLs and is considering the new URLs to be duplicate content, since it's being told not to crawl the subdomain and therefore can't see the rel canonicals we have in place. To resolve this, we've updated the subdomain's robots.txt to no longer block crawling and indexing. Theoretically, seeing the canonical tag on the subdomain pages will resolve any perceived duplicate content issues. In the meantime, we were wondering if anyone would have any other ideas. We are very concerned that we'll be losing valuable traffic, as we're entering our on-season at the moment.
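    For reference, the two moving parts described above boil down to something like this (the post-slug and paths are illustrative):

      # robots.txt on blog.example.com -- crawling allowed again so Google can see the canonicals
      User-agent: *
      Disallow:

      <!-- in the <head> of each subdomain page, pointing at the proxied location -->
      <link rel="canonical" href="http://www.example.com/blog/post-slug/" />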

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies.

    1. Cross-schema dependencies

    Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1:

    Source:
      CREATE TABLE SchemaA.Table1 (Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
      CREATE TABLE SchemaB.Table1 (Col1 NUMBER PRIMARY KEY);

    Target:
      CREATE TABLE SchemaA.Table1 (Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1));
      CREATE TABLE SchemaB.Table1 (Col1 VARCHAR2(100) PRIMARY KEY);

    We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of:

    1. Creating a table with the new schema
    2. Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER)
    3. Dropping the old table
    4. Renaming the new table to the same name as the old table

    Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population pre-filtering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is:

    1. Drop the foreign key constraint on SchemaA.Table1
    2. Rebuild SchemaB.Table1
    3. Rebuild SchemaA.Table1, adding the foreign key constraint to the new table

    This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA.

    Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we're populating, or point to objects in the schemas we're populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies, eg a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn't enough.

    2. Dependency chains

    The solution above will only get the immediate dependencies of objects in populated schemas. What if there's a chain of dependencies?

      A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1

    If we're only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents.
    Fortunately, we don't have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL -- CONNECT BY. So, we can put all the dependencies we want to include together in a big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we're populating. We should end up with all the objects that might be affected by modifications in the initial schema we're populating.

    Good solution? Well, no. For one thing, it's sloooooow. all_dependencies, on my test databases, has got over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance.

    3. Comparison dependencies

    Consider the following schema:

    Source:
      CREATE TABLE SchemaA.Table1 (Col1 NUMBER REFERENCES SchemaB.Table1(col1));
      CREATE TABLE SchemaB.Table1 (Col1 NUMBER PRIMARY KEY);

    Target:
      CREATE TABLE SchemaA.Table1 (Col1 VARCHAR2(100));
      CREATE TABLE SchemaB.Table1 (Col1 VARCHAR2(100));

    What will happen if we used the dependency algorithm above on the source & target database? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference. Therefore SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two object models, what you will end up with is:

      SchemaA.Table1 (source) -> SchemaA.Table1 (target)
      SchemaB.Table1 (source) -> (no object exists)

    When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions:

    1. Create SchemaB.Table1
    2. Rebuild SchemaA.Table1, with foreign key to SchemaB.Table1

    Oops. Because the dependencies are only followed within a single database, we've tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won't try and create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1):

    Source:
      CREATE TABLE SchemaA.Table1 (Col1 NUMBER REFERENCES SchemaB.Table1(col1));
      CREATE TABLE SchemaB.Table1 (Col1 NUMBER PRIMARY KEY);
      CREATE TABLE SchemaC.Table1 (Col1 NUMBER);

    Target:
      CREATE TABLE SchemaA.Table1 (Col1 VARCHAR2(100));
      CREATE TABLE SchemaB.Table1 (Col1 VARCHAR2(100) PRIMARY KEY);
      CREATE TABLE SchemaC.Table1 (Col1 VARCHAR2(100) REFERENCES SchemaB.Table1);

    Although we're now including SchemaB.Table1 on both sides of the comparison, there's a third table (SchemaC.Table1) that we don't know about that will cause the rebuild of SchemaB.Table1 to fail if we try and synchronize SchemaA.Table1. That's because we're only running the dependency query on the schemas we're explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database.
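    To make the CONNECT BY idea above concrete, a dictionary-level traversal would look roughly like this sketch (simplified -- the real query also UNION ALLs in foreign-key edges from all_constraints, and the column list is illustrative):

      -- Walk all_dependencies outwards from one schema.
      -- NOCYCLE guards against circular references.
      SELECT owner, name, type, referenced_owner, referenced_name
        FROM all_dependencies
       START WITH owner = 'SCHEMAA'
      CONNECT BY NOCYCLE PRIOR referenced_owner = owner
                     AND PRIOR referenced_name = name;

    Even in this simplified form the query was too slow in practice, which is part of what motivated moving the traversal client-side, as described below.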
    Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison:

    1. Find the initial dependencies of the schemas the user has selected to compare on the source and target
    2. Include these objects in both the source and target object populations
    3. Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa
    4. Repeat 2 & 3 until no more objects are found

    For the schema above, this will result in the following sequence of actions:

    1. Find initial dependencies: SchemaA.Table1 -> SchemaB.Table1 found on source; no objects found on target
    2. Include objects in both source and target: SchemaB.Table1 included in source and target
    3. Run dependency query, starting with found objects: no objects to start with on source; SchemaB.Table1 -> SchemaC.Table1 found on target
    4. Include objects in both source and target: SchemaC.Table1 included in source and target
    5. Run dependency query on found objects: no objects found in source; no objects to start with in target
    6. Stop

    This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don't read the entire dependency graph onto the client we also pull the graph across in bits -- we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don't yet know about, a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed.

    4. Object blacklists and fast dependencies

    When we tested this solution, we were puzzled in that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (eg the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in that schema. To solve this we introduced blacklists of objects we wouldn't follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY, ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory.

    Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that there were some columns, with dependency information we required, that were querying system tables with no indexes on them!
    To cut a long story short, running the following query:

      SELECT * FROM all_tab_cols WHERE data_type_owner = 'XDB';

    results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the 'Ignore slow dependencies' option was born -- not querying this and a couple of similar clauses to drastically speed up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases.

    Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!

  • Disabling discrete/native graphics on MacBook Retina when using apps like VMware, VLC, etc

    - by badkitteh
    I got a MacBook Pro Retina a few months ago and I'm really happy with it. However, since I do a lot of work in different environments/OSes, I make heavy use of VMware to have them with me when I'm on the road. The MBPr has great battery life -- as long as the integrated graphics are being used. Unfortunately, as soon as I launch VMware, the MacBook switches to discrete graphics and battery life is effectively halved. I noticed that after installing gfxCardStatus, and now I'm wondering if there is a way to force the integrated graphics to be used all of the time, so I can enjoy maximum battery life. Thanks.

  • Is it OK to reoccupy my old GitHub username to protect repository redirections?

    - by Idan Arye
    I'm considering changing my GitHub username from the old alias I was using as a kid to my real name. I'm concerned about my repository URLs. GitHub will redirect the old URLs, but if someone creates a new account using my old username and creates a repository with the same name as one of my repositories, the URL redirection will break and the URL will lead to their repository, not mine. Now, this is understandable, and GitHub recommends not counting on the redirect in the long term, and updating all the remotes, but I'm concerned about some Vim plugins I'm hosting on GitHub. It's a common practice to manage Vim plugins with Git (either as separate repositories or as submodules), and if one of the plugins' remotes breaks, you'll get error messages when you try to batch-update all your plugins (it happened to me once...). It's not that hard to solve, and the chances that it'll happen are slim, but I would still like to avoid causing trouble for the users of my plugins... To prevent this, I'm thinking of creating a new account with my old username. That way I can avoid the risk of someone else taking my old username and breaking the redirects of my old repositories. While researching this approach I've found GitHub's Name Squatting Policy. According to that policy, GitHub can delete or rename inactive accounts. To my understanding, they do this to prevent cybersquatting, but surely this isn't the case with my fake account -- I'm not holding someone else's name in an attempt to sell it to them, I'm merely occupying a name I was using, to protect my old URLs... So, is it acceptable to go with this plan and create a fake account with my old username?

  • ERP/CRM Systems: Desktop Based? Web Based?

    - by Parhs
    Hello guys... I have seen 2-3 ERPs in action, and I am wondering which is better: a desktop-based application, or a web-based one displayed in a browser?

    My first experience was with a web-based ERP when I was 14 years old. It was terribly slow: for the simplest tasks you had to do lots of clicks, there was no keyboard support, and pages took ages to load. Last year I worked on migrating an old terminal-based COBOL application to a newer computer. The machine that had worked until today, and still has no problems, was from 1993. The user interface, of course, was text-based. The speed at which those guys placed orders was amazing: just type the name of the customer, then 5-10 keys to add a product to the order. Compared to this, the ERP page for placing orders (follow the link and click "sales orders") seems terribly slow for adding a product. No keyboard shortcut works to save what you added, and generally I believe you need 4 times more time to place an order compared to the text interface. Having to use both mouse and keyboard for this task is bad and sadistic... So how the heck can these people ever use a system like that?

    So in the long run a desktop application seems the only way. Of course browsers support shortcuts, but the way to override the defaults a browser uses isn't cross-compatible; that is a huge problem. Finally, if we are forced to use the cloud in the near future, what about keyboard shortcuts? I feel confused... I have seen converters of desktop applications to browser applications, but they are slow as hell. The question is: what about user-friendliness? What kind of application would you use?

  • 32-bit java dominates my PATH magically

    - by Kos
    I have a 32-bit Java installed just for Chrome and a 64-bit Java JDK for everything else. When I type java -version in cmd, the 32-bit Java answers:

      C:\>java -version
      java version "1.6.0_26"
      Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
      Java HotSpot(TM) Client VM (build 20.1-b02, mixed mode, sharing)

    This is the 32-bit JRE installed for Chrome (the installer name was chromeinstall.exe). However, I'd like the default Java to be this one:

      C:\>"Program Files\Java\jre6\bin\java.exe" -version
      java version "1.6.0_26"
      Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
      Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)

    And for the fun part, only the 64-bit one is in PATH!

      C:\>echo %PATH%
      C:\Windows\system32;C:\Program Files\Java\jre6\bin
      (snipped irrelevant entries)

    So, long story short: the 64-bit JRE is in PATH, but the 32-bit JRE is run by default. What is happening here? How do I fix it? I tried reinstalling the 64-bit JDK as a whole; it didn't help.
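    One plausible explanation (an assumption on my part, not something visible in the output above): Windows JRE installers drop a java.exe into the Windows system directory, and since C:\Windows\system32 comes first in your PATH, that stray copy shadows the JDK's. Listing every java.exe in PATH search order would confirm it:

      :: where.exe (built into Windows 7) lists matches in the order cmd resolves them; the first one wins.
      C:\>where java

    If a copy shows up under the Windows directory, removing it, or moving C:\Program Files\Java\jre6\bin ahead of C:\Windows\system32 in PATH, should make the 64-bit java the default.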

  • Working with fubar/refuctored code

    - by Keyo
    I'm working with some code which was written by a contractor who left a year ago, leaving a number of projects with buggy, disgustingly bad code. This is what I call cowboy PHP, say no more. Ideally I'd like to leave the project as-is and never touch it again, but things break, requirements change, and it needs to be maintained. Part A needs to be changed. There is a bug I cannot reproduce. Part A is connected to parts B, D and E. This kind of work gives me a headache and makes me die a little inside. It kills my motivation and productivity. To be honest, I'd say it's affecting my mental health. Perhaps, being at the start of my career, I'm being naive to think production code should be reasonably clean. I would like to hear from anyone else who has been in this situation before. What did you do to get out of it? I'm thinking that long term I might have to find another job.

    Edit: I've moved on from this company now, to a place where idiots are not employed. The code isn't perfect, but it's at least manageable and peer-reviewed. There are a lot of people in the comments below telling me that software is messy like this. Sure, I don't agree with the way some programmers do things, but this code was seriously mangled. The guy who wrote it tried to reinvent every wheel he could, and badly. He stopped getting work from us because of his bad code, which nobody on the team could stand. If it were easy to refactor, I would have. Eventually, after many "just do this small 10-minute change" situations had ballooned into hours of lost time (regardless of who on the team was doing the work), my boss finally caved in and it was rewritten.

  • How to Generate a Create Table DDL Script Along With Its Related Tables

    - by Compudicted
    Have you ever wondered, when creating table diagrams in SQL Server Management Studio (SSMS), how slickly you can add related tables to a diagram by just right-clicking on the interesting table name? Have you also ever needed to script those related tables, including the master one? And then discovered you have dozens of related tables? Or maybe had no SSMS at your disposal? That was me one day. Well, creativity to the rescue! I Binged and Googled around until I found more or less what I wanted, but it all involved T-SQL -- yeah, long and convoluted CROSS APPLYs. Then I saw a PowerShell solution that I quickly adapted to my needs (I am not referencing any particular author because it was a mashup):

      ###########################################################################################################
      # Created by: Arthur Zubarev on Oct 14, 2012                                                              #
      # Synopsis: Generate a file containing the root table CREATE (DDL) script along with all its related tables
      ###########################################################################################################

      [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

      $RootTableName = "TableName" # The table name, no schema name needed

      $srv = New-Object Microsoft.SqlServer.Management.Smo.Server("TargetSQLServerName")
      $conContext = $srv.ConnectionContext
      $conContext.LoginSecure = $True
      # In case integrated security is not used, uncomment below
      #$conContext.Login = "sa"
      #$conContext.Password = "sapassword"
      $db = $srv.Databases.Item("TargetDatabase")

      $scrp = New-Object Microsoft.SqlServer.Management.Smo.Scripter($srv)
      $scrp.Options.NoFileGroup = $True
      $scrp.Options.AppendToFile = $False
      $scrp.Options.ClusteredIndexes = $False
      $scrp.Options.DriAll = $False
      $scrp.Options.ScriptDrops = $False
      $scrp.Options.IncludeHeaders = $True
      $scrp.Options.ToFileOnly = $True
      $scrp.Options.Indexes = $False
      $scrp.Options.WithDependencies = $True
      $scrp.Options.FileName = 'C:\TEMP\TargetFileName.SQL'

      $smoObjects = New-Object Microsoft.SqlServer.Management.Smo.UrnCollection
      Foreach ($tb in $db.Tables)
      {
          Write-Host -foregroundcolor yellow "Table name being processed" $tb.Name

          If ($tb.IsSystemObject -eq $FALSE -and $tb.Name -eq $RootTableName) # feel free to customize the selection condition
          {
              Write-Host -foregroundcolor magenta $tb.Name "table and its related tables added to be scripted."
              $smoObjects.Add($tb.Urn)
          }
      }

      # The actual act of scripting
      $sc = $scrp.Script($smoObjects)

      Write-Host -foregroundcolor green $RootTableName "and its related tables have been scripted to the target file."

    Enjoy!

  • API Class with intensive network requests

    - by Marco Acierno
    I'm working on an API which works as an "intermediary" between a REST API and the developer. When the programmer does something like this:

      User user = client.getUser(nickname);

    it will execute a network request to download the data about that user, and then the programmer can use the data by doing things like

      user.getLocation();
      user.getDisplayName();

    and so on. Now, there are some methods, like getFollowers(), which execute another network request, and I could handle that in two ways:

    1. Download all the data in the getUser method (not only the most important parts). This way the request time could be very long, since it would have to hit various URLs.
    2. Download the data when the user calls the method. This looks like the best way, and to improve it I could cache the result, so the next call to getFollowers() returns immediately with the data already downloaded instead of executing the request again.

    What is the best way? And should I let methods like getUser and getFollowers block until the data is ready, or should I implement a callback that gets fired when the data is ready? (This looks like JavaScript.)
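    For reference, option 2 in its simplest blocking form might look like this Java sketch (the class shape mirrors the snippets above; the names are illustrative, not from any real library):

      import java.util.Collections;
      import java.util.List;

      public class User {
          private final String nickname;
          private List<User> followers; // null until first requested

          public User(String nickname) {
              this.nickname = nickname;
          }

          // Lazily fetches on the first call, then serves the cached copy.
          public synchronized List<User> getFollowers() {
              if (followers == null) {
                  followers = fetchFollowersFromNetwork(); // blocking network request
              }
              return Collections.unmodifiableList(followers);
          }

          // Placeholder for the real HTTP call to the REST endpoint.
          private List<User> fetchFollowersFromNetwork() {
              return Collections.emptyList();
          }
      }

    A callback variant would take a listener argument instead of returning the list, which keeps the caller from blocking but makes the API noisier to use.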

  • Unwanted Chinese language got set in system settings

    - by Registered User
    I was discussing on the Ubuntu users list how to type in Hindi (an Indic language) in LibreOffice, and a package installation problem. Following suggestions by some users, I made some changes in System Settings. However, this morning when I rebooted, I could no longer see English as my default language: my system is showing Chinese characters which I do not understand. All I wanted was to use LibreOffice for a particular document in Hindi. Even Gmail is opening in Chinese, and the System Settings folder and others are also opening in Chinese. I am unable to use the system now. I have uploaded snapshots; please have a look:

    - Upon a reboot, I was asked to rename all folders.
    - Gmail opening in Chinese.
    - This is how the menu on my system looks: half English and half Chinese. Notice that in the third snapshot the calendar and menu are appearing in Chinese.

    I want the original US English menus and folder names back. I just wanted to type a document with the Lohit Hindi font in LibreOffice. I use Ubuntu 11.10. I do not use Unity, only the GNOME desktop; I installed gnome-session-fallback a long time back and have been using that ever since. How do I get back to all-English submenus and English folder names? I have a US English keyboard and I use only US English. This language which is now somehow set is unwanted.

  • Easy shorewall question: allow IPs to DNAT

    - by llazzaro
    Hello. On my home network I have a transparent proxy. This is the rule that forwards all port-80 traffic to my Squid 3.1 server in the DMZ:

      DNAT loc:!10.0.0.126 dmz:172.16.0.198:3128 tcp 80 - !172.16.0.198

    Now I need to add more IPs that should bypass the transparent proxy. I tried loc:!10.0.0.134,!10.0.0.126... but that didn't work (also similar forms like [ip0,ip1]). I tried to Google the answer but couldn't find it (sorry, no matches; probably not searching for the right keywords), and I tried to read the docs, but they are really long (and the indexes don't help me). Thanks!
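    If I remember the shorewall exclusion syntax correctly (an educated guess, worth verifying against the shorewall-exclusion man page for your version), the ! introduces a single comma-separated exclusion list rather than being repeated per address, so the rule would look something like:

      #ACTION  SOURCE                      DEST                   PROTO  DEST PORT  SOURCE PORT  ORIGINAL DEST
      DNAT     loc:!10.0.0.126,10.0.0.134  dmz:172.16.0.198:3128  tcp    80         -            !172.16.0.198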

  • 3.5mm Headphone and Mic --> 3.5mm Headphone and 3.5mm Mic

    - by Taylor Price
    I am looking for a way to take the plug from an earbud-and-mic set (e.g. the VModa Vibe Duo) and split it into separate 3.5mm headphone and mic plugs so that I can plug into my computer. Has anybody seen such a splitter? (Yes, I have done some quick searching for it.) Thanks in advance. The reason why, if you care to read this far, is that I work from home, and long conference calls with a big set of over-the-ear headphones can get tiring. So I'd like to try a nice earbud/mic set and see if that is more comfortable.

  • Script language native extensions - avoiding name collisions and cluttering others' namespace

    - by H2CO3
    I have developed a small scripting language and I've just started writing the very first native library bindings for it. This is practically the first time I'm writing a native extension to a script language, so I've run into a conceptual issue. I'd like to write glue code for popular libraries so that they can be used from this language, and because of the design of the engine I've written, this is achieved using an array of C structs describing the function name visible to the virtual machine, along with a function pointer. Thus, a native binding is really just a global array variable, and now I must obviously give it a (preferably good) name.

    In C, it's idiomatic to put one's own functions in a "namespace" by prepending a custom prefix to function names, as in myscript_parse_source() or myscript_run_bytecode(). The custom name should ideally describe the name of the library it is part of. Here arises the confusion. Let's say I'm writing a binding for libcURL. In this case, it seems reasonable to call my extension library curl_myscript_binding, like this:

      MYSCRIPT_API const MyScriptExtFunc curl_myscript_lib[10];

    But now this collides with the curl namespace. (I have even thought about calling it curlmyscript_lib, but unfortunately libcURL does not exclusively use the curl_ prefix -- the public APIs contain macros like CURLCODE_* and CURLOPT_*, so I assume this would clutter the namespace as well.) Another option would be to declare it as myscript_curl_lib, but that's good only as long as I'm the only one who writes bindings (since I know what I am doing with my namespace). As soon as other contributors start to add their own native bindings, they would clutter the myscript namespace. (I've done some research, and it seems that, for example, the Perl cURL binding follows this pattern. Not sure what I should think about that...)

    So how do you suggest I name my variables? Are there any general guidelines that should be followed?
