Search Results

Search found 114 results on 5 pages for 'hammer bro'.

Page 4/5 | < Previous Page | 1 2 3 4 5  | Next Page >

  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture and has only really come together as a concrete approach in the last 15 years or so, although the concepts involved have been around for longer. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies, including Oracle and IBM. SCA, as used in SOA Suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles, like the separation of application and business logic, are maintained.

    Orchestration or Integration?

    A common thing to see with many people who are beginning either to build a new SOA-based infrastructure or to move an old system to be service oriented is confusion about the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however, it’s important to remember that, although a hammer can be used to drive a screw into wood, that doesn’t mean it’s the best way to do it.

    Service Integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation – a simple product ordering system, for example, might integrate a stock checking service and a payment processing service. Process Orchestration, however, is generally a higher-level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might take the earlier example of a product ordering service and couple it with a business rules service and a human task to handle edge cases. A good (but not exhaustive) rule of thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or have to wait pending manual intervention.

    BPEL vs BPMN

    For some, with pre-existing SOA or business process projects, this decision is effectively already made. For those embarking on new projects it’s an important consideration when using Oracle SOA software since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA Suite has no BPMN engine, whereas BPM Suite has both a BPMN and a BPEL engine; SOA Suite has the ESB component “Mediator”, whereas BPM Suite has none. Decisions must therefore be made on whether just one or both process modelling languages are to be used. The wrong decision could be costly further down the line. Design for performance: read the complete article here.

    SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Should programmers do Pro Bono work? Where are the code public defenders?

    - by Tj Kellie
    How many projects are people doing based on the pro bono publico ideal versus working for the highest wage or the potential for a cash-in-buy-out payday? For years lawyers have been called out for excessive gathering of wealth from high bill rates and huge settlement deals, hiring out their knowledge and skills to the highest bidders. People call for them to do more for free, using the law and their time to defend or further some cause that's in the public's best interest. Is professional software development that different? So many bright people and so much knowledge of complex systems. Do you think that there is enough of a "Pro Bono" movement to solve the social and public problems in the industry right now? If so, what are the examples to point to? OLPC? NOTE: Saying that open source software is the same as pro bono misses the point completely. I was looking for specific projects with a social context, not just group-sourcing for free software. Just because you're not making anyone pay for your software does not mean it's doing anyone any good. I'm not calling for mandatory enforcement of pro bono work for programmers; I really just want some objective opinions and concrete examples of socially minded software/tech development projects like the One Laptop Per Child project. I'm sure open source would be a natural tie-in for some.

    Read the article

  • running scala apps with java -jar

    - by paintcan
    Yo dawgs, I got some problems with the java. Check it out.

        sebastian@sebastian-desktop:~/scaaaaaaaaala$ java -cp /home/sebastian/.m2/repository/org/scala-lang/scala-library/2.8.0.RC3/scala-library-2.8.0.RC3.jar:target/scaaaaaaaaala-1.0.jar scaaalaaa.App
        Hello World!

    That's cool, right, but how bout this:

        sebastian@sebastian-desktop:~/scaaaaaaaaala$ java -cp /home/sebastian/.m2/repository/org/scala-lang/scala-library/2.8.0.RC3/scala-library-2.8.0.RC3.jar -jar target/scaaaaaaaaala-1.0.jar
        Exception in thread "main" java.lang.NoClassDefFoundError: scala/Application
            at java.lang.ClassLoader.defineClass1(Native Method)
            at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
            at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
            at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
            at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
            at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
            at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
            at scaaalaaa.App.main(App.scala)
        Caused by: java.lang.ClassNotFoundException: scala.Application
            at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
            ... 13 more

    Wat the heck? Any idea why the first works and not the 2nd? How do I -jar my scala?? Thanks in advance bro.
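    Not an answer from the original thread, but the behaviour has a well-known cause: when the -jar option is used, the JVM takes the classpath solely from the jar's manifest and silently ignores -cp and the CLASSPATH variable, so scala-library is never on the classpath. A quick, hypothetical way to see it for yourself (this class is made up for illustration):

        // Hypothetical helper, not from the original post: prints the classpath the JVM
        // actually ended up with. Launched with "java -jar app.jar" it shows only the jar
        // itself, no matter what -cp was passed on the command line.
        public class ClasspathCheck {
            public static void main(String[] args) {
                System.out.println(System.getProperty("java.class.path"));
            }
        }

    The usual fixes are to add a Class-Path attribute to the jar's manifest, build a single self-contained "fat" jar, or keep using -cp and name the main class explicitly as in the first command.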

    Read the article

  • OS X, Chrome, and Spaces annoyance

    - by David Hollman
    Here's my problem: I use Google Chrome as my web browser on MacOS X Snow Leopard. I am a keyboard shortcut addict, and I use QuickSilver to create keyboard shortcuts for anything I can. One of the most common things that I do is to open a new web browser window. But I use Spaces frequently to partition my tasks that I am currently working on, and when I open a web browser or web page with a QuickSilver trigger, spaces switches to the last space that I used Chrome on and opens a new tab, which often distracts me for hours because it brings me to a different space and thus a different task. I can fix this by right-clicking on the Google Chrome icon and clicking the "New Window" option, which opens a new window on the current space. I have tried to compose an AppleScript to do something like this, with no success. It has become a serious problem. Back when I used Firefox, I solved the problem by changing a preference item that says "Always open pop-up links in a new window" or something like that, which was kind of a sledge hammer approach, but it worked. I can always go back to Firefox, but I thought I'd ask my question here first. Anyone with any ideas?

    Read the article

  • VirtualBox management interface unreliability

    - by Arlen Cuss
    I'm using VirtualBox 3.2.8_OSE with 20 VMs running, and everything's going fine. I find that if I hammer the VBoxManage interface, all sorts of interesting things happen, usually necessitating either a restart of the VM in question, or of all VMs. For instance, if I use VBoxManage guestcontrol execute to run processes, after a few hours of using it maybe once or twice a minute on any given VM, it'll mysteriously start reporting VERR_NOT_IMPLEMENTED and refusing to do anything—sometimes trying to restart /usr/sbin/VBoxService on the VM itself will get it back in working order, but often it won't, and in the meantime, no data can be collected using VBoxManage. Such data includes the VM's IP, so if I hadn't recorded it earlier, I'm usually in trouble and have no option but to portscan the network for it, or kill the VM's process on the host manually and restart it. This one I haven't narrowed down yet, but it seems that even using VBoxManage guestproperty get (to retrieve a machine's IP) frequently and rapidly is enough to cause all VMs' management interfaces to die. The processes are still running fine, but VBoxManage reports them all as "powered off". In the meantime, another process somewhere in the system seems to have decided that their being powered off means they need to be powered on again, and suddenly I have 2x the number of VBoxHeadless processes running than I used to. Has anyone else seen behaviour like this? Is there any workaround? This is a serious impediment to my work, as I've had to resort to a lot of (hacky) caching of data and rate-limiting how often I call VBoxManage, just in case I accidentally bring 20 VMs to their knees.
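    The "(hacky) caching of data and rate-limiting" mentioned at the end can at least be centralised in one place. Below is a rough sketch of that idea in Java; the class name, TTL and minimum gap are assumptions for illustration, and only the VBoxManage commands themselves come from the question:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Sketch only: caches VBoxManage output for a short TTL and enforces a minimum gap
        // between real invocations, so frequent "guestproperty get" style lookups don't
        // hammer the management interface. TTL and gap values are arbitrary assumptions.
        class ThrottledVBoxManage {
            private static final long CACHE_TTL_MS = 60_000;  // reuse answers for one minute
            private static final long MIN_GAP_MS = 2_000;     // at most one real call every 2 s
            private final Map<String, String> cache = new HashMap<>();
            private final Map<String, Long> cachedAt = new HashMap<>();
            private long lastCall = 0;

            synchronized String run(String... args) throws Exception {
                String key = String.join(" ", args);
                Long ts = cachedAt.get(key);
                if (ts != null && System.currentTimeMillis() - ts < CACHE_TTL_MS) {
                    return cache.get(key);                     // cached answer, no process spawned
                }
                long wait = MIN_GAP_MS - (System.currentTimeMillis() - lastCall);
                if (wait > 0) Thread.sleep(wait);              // space out the real calls
                lastCall = System.currentTimeMillis();

                List<String> cmd = new ArrayList<>();
                cmd.add("VBoxManage");
                cmd.addAll(Arrays.asList(args));
                Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
                StringBuilder out = new StringBuilder();
                try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                    for (String line; (line = r.readLine()) != null; ) {
                        out.append(line).append('\n');
                    }
                }
                p.waitFor();
                cache.put(key, out.toString());
                cachedAt.put(key, System.currentTimeMillis());
                return out.toString();
            }
        }

    Something like new ThrottledVBoxManage().run("guestproperty", "get", vmName, property) then serves repeated IP lookups from the cache. Whether a one-minute TTL is fresh enough is a judgement call, and it obviously doesn't fix the underlying VERR_NOT_IMPLEMENTED behaviour; it only reduces how hard the interface gets hit.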

    Read the article

  • Computer making strange sound when turned on, ever since power outage

    - by Dot NET
    Recently we experienced a power outage, and the PC was off. However, once the power came back, I switched on the PC and heard a strange noise - almost as if the hard disk or fans were struggling to work. I can't really describe the sound, but it's a laboured, loud sound almost like a jack-hammer. This has been persisting ever since the power outage, however the noise stops after around 10 minutes or so, and doesn't start again until the computer is turned off and on again. At first I thought it had something to do with the HDD, but all my files are intact, chkdsk did not report any issues and performance is 100% unchanged, even in games (so the gfx card is fine, and so is the HDD most likely). My PC setup basically has around 3 cooling fans, but I'm not sure if it's one of these either as the noise actually stops after 10 minutes or so, and if I leave the PC on for 4 hours (for example) the noise never starts again. It's there solely when turning on the PC. I haven't got a UPS, and it's important to note that the computer was not on when the power went out - it was merely plugged in. I then promptly unplugged the PC once the power was out, and only plugged it in again when the power came back. Could it be the power supply? Unfortunately I can't open my tower as I would void the warranty. Are there any tests which I could carry out without voiding the warranty?

    Read the article

  • How can I get multiple video cards to work on linux?

    - by user17943
    I installed fedora 12. I have 2 ATI cards that I used to use on windows to run 4 monitors. A recurring problem has been to get them detected in linux. Only my secondary card is picked up linux. When I manage the displays it detects the 2 monitors connected that card. What are the specific steps I should take to get the second card detected? Supposedly there is a tool system-config-xfree. I don't have it, yum can't find it. Also I heard it has something to do with editing some xorg.conf file or something to that effect. I have absolutely no idea how to find the "bus id" of my card, or lookup the horizontal refresh rates, etc.. I would probably have no problem following the documentation & editing the file if I knew a good way to find these values. Someone also suggested installing linux twice and saving the xorg.conf it generates each time (with different card each time) and then merging the two by hand. That is like killing a fly with a hammer though, when I do this again and again in the future It'd be nice to not have to take twice as long. Thanks

    Read the article

  • Query Logging in Analysis Services

    - by MikeD
    On a project I work on, we capture the queries that get executed on our Analysis Services instance (SQL Server 2008 R2) and use the table to help us build aggregations, and we also aggregate the query log daily into a data warehouse of operational data so we can track usage of our Analysis databases by users over time. We've learned a couple of helpful things about this logging that I'd like to share here.

    First off, the query log table automatically gets cleaned out by SSAS under a few conditions - schema changes to the analysis database and even regular data and aggregation processing can delete rows in the table. We like to keep these logs longer than that, so we have a trigger on the table that copies all rows into another table with the same structure. Here is our trigger code:

        CREATE TRIGGER [dbo].[SaveQueryLog] ON [dbo].[OlapQueryLog] AFTER INSERT AS
            INSERT INTO dbo.[OlapQueryLog_History]
                (MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration)
            SELECT MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration
            FROM inserted

    Second, the query logging process is "best effort" - if SSAS cannot connect to the database listed in the QueryLogConnectionString in the Analysis Server properties, it just stops logging - it doesn't generate any errors to the client at all, which is a good thing. Once it stops logging, it doesn't retry later - an hour, a day, a week, or even a month later, so long as the service doesn't restart.

    That has burned us a couple of times, when we have made changes to the service account that is used for SSAS, and that account doesn't have access to the database we want to log to. The last time this happened, we noticed a while later that no logging was taking place, and I determined that the service account didn't have sufficient permissions, so I made the necessary changes to give that service account access to the logging database. I first tried just the db_datawriter role and that wasn't enough, so I granted the service account membership in the db_owner role. Yes, that's a much bigger set of permissions, but I didn't want to search out the specific permissions at the time.

    Once I determined that the service account had the appropriate permissions, I wanted to get query logging restarted from SSAS, and I wondered how to do that. Having just used a larger hammer than necessary with the db_owner role membership, I considered just restarting SSAS to get it logging again. However, this was a production server, it was in the middle of business hours, and there were active users connecting to that SSAS instance, so I thought better of it.

    As I considered the options, I remembered that the first time I set up query logging, by putting in a valid connection string to the QueryLogConnectionString server property, logging started immediately after I saved the properties. I wondered if I could make some other change to the connection string so that the query logging would start again without restarting the service. I went into the connection string dialog, went to the All page, and looked at the properties I could change that wouldn't affect the actual connection. Aha! The Application Name property would do just nicely - I set it to "SSAS Query Logging" (it was previously blank) and saved the changes to the server properties. And the query logging started up right away.

    If I need to get this running again in the future, I could just make a small change in the Application Name property again, save it, and even change it back again if I wanted to. The other nice side effect of setting the Application Name property is that now I can see (and possibly filter for or filter out) the SQL activity in that database that is related to the query logging process in Profiler.

    To sum up:

    - The SSAS query logging process will automatically delete rows from the QueryLog table, so if you want to keep them longer, put a trigger on the table to copy the rows to another table.
    - The SSAS service account requires more than db_datawriter role membership (and probably less than db_owner) in the database specified in the QueryLogConnectionString server property to successfully insert log rows to the QueryLog table.
    - Query logging will stop quietly whenever it encounters an error. Make a change to the QueryLogConnectionString server property (such as the Application Name attribute) to get query logging to restart and you won't have to restart the service.

    Read the article

  • Simple task framework - building software from reusable pieces

    - by RuslanD
    I'm writing a web service with several APIs, and they will be sharing some of the implementation code. In order not to copy-paste, I would like to ideally implement each API call as a series of tasks, which are executed in a sequence determined by the business logic. One obvious question is whether that's the best strategy for code reuse, or whether I can look at it in a different way. But assuming I want to go with tasks, several issues arise: What's a good task interface to use? How do I pass data computed in one task to another task in the sequence that might need it?

    In the past, I've worked with task interfaces like:

        interface Task<T, U> {
            U execute(T input);
        }

    Then I also had sort of a "task context" object which had getters and setters for any kind of data my tasks needed to produce or consume, and it gets passed to all tasks. I'm aware that this suffers from a host of problems. So I wanted to figure out a better way to implement it this time around. My current idea is to have a TaskContext object which is a type-safe heterogeneous container (as described in Effective Java). Each task can ask for an item from this container (task input), or add an item to the container (task output). That way, tasks don't need to know about each other directly, and I don't have to write a class with dozens of methods for each data item.

    There are, however, several drawbacks:

    - Each item in this TaskContext container should be a complex type that wraps around the actual item data. If task A uses a String for some purpose, and task B uses a String for something entirely different, then just storing a mapping between String.class and some object doesn't work for both tasks. The other reason is that I can't use that kind of container for generic collections directly, so they need to be wrapped in another object.
    - This means that, based on how many tasks I define, I would need to also define a number of classes for the task items that may be consumed or produced, which may lead to code bloat and duplication. For instance, if a task takes some Long value as input and produces another Long value as output, I would have to have two classes that simply wrap around a Long, which IMO can spiral out of control pretty quickly as the codebase evolves.

    I briefly looked at workflow engine libraries, but they kind of seem like a heavy hammer for this particular nail. How would you go about writing a simple task framework with the following requirements:

    - Tasks should be as self-contained as possible, so they can be composed in different ways to create different workflows.
    - That being said, some tasks may perform expensive computations that are prerequisites for other tasks. We want to have a way of storing the results of intermediate computations done by tasks so that other tasks can use those results for free.
    - The task framework should be light, i.e. growing the code doesn't involve introducing many new types just to plug into the framework.
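    One possible shape for the type-safe heterogeneous TaskContext idea above, sketched in Java (not the poster's code): keys carry a name as well as a type, so two tasks can each store a String without colliding, and a Long input and a Long output don't each need their own wrapper class.

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Objects;

        // Typed, named keys instead of raw Class objects as the map key.
        final class Key<T> {
            final String name;
            final Class<T> type;
            Key(String name, Class<T> type) { this.name = name; this.type = type; }
            @Override public boolean equals(Object o) {
                return o instanceof Key<?> && ((Key<?>) o).name.equals(name) && ((Key<?>) o).type == type;
            }
            @Override public int hashCode() { return Objects.hash(name, type); }
        }

        // The heterogeneous container: the cast through key.type keeps lookups type-safe.
        final class TaskContext {
            private final Map<Key<?>, Object> items = new HashMap<>();
            <T> void put(Key<T> key, T value) { items.put(key, key.type.cast(value)); }
            <T> T get(Key<T> key) { return key.type.cast(items.get(key)); }
        }

        // Tasks only see the context, never each other.
        interface Task {
            void execute(TaskContext ctx);
        }

        // Example keys (names are made up): no wrapper classes needed for plain values.
        // Key<Long> CUSTOMER_ID = new Key<>("customerId", Long.class);
        // Key<Long> ORDER_ID = new Key<>("orderId", Long.class);

    Generic collections stay awkward because of erasure - a List<String> either gets its own small key type or a "super type token" style key - so this is a sketch of the direction rather than a complete framework.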

    Read the article

  • Script to UPDATE STATISTICS with time window

    - by Bill Graziano
    I recently spent some time troubleshooting odd query plans and came to the conclusion that we needed better statistics. We’ve been running sp_updatestats but apparently it wasn’t sampling enough of the table to get us what we needed. I have a pretty limited window at night where I can hammer the disks while this runs. The script below just calls UPDATE STATISTICS on all tables that “need” updating. It defines need as any table whose statistics are older than the number of days you specify (30 by default). It also has a throttle so it breaks out of the loop after a set amount of time (60 minutes). That means it won’t start processing a new table after this time but it might take longer than this to finish what it’s doing. It always processes the oldest statistics first so it will eventually get to all of them. It defaults to sample 25% of the table. I’m not sure that’s a good default but it works for now. I’ve tested this in SQL Server 2005 and SQL Server 2008. I liked the way Michelle parameterized her re-index script and I took the same approach.

        CREATE PROCEDURE dbo.UpdateStatistics (
              @timeLimit smallint = 60
            , @debug bit = 0
            , @executeSQL bit = 1
            , @samplePercent tinyint = 25
            , @printSQL bit = 1
            , @minDays tinyint = 30
        )
        AS
        /*******************************************************************
         Copyright Bill Graziano 2010
        *******************************************************************/
        SET NOCOUNT ON;
        PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Launching...'

        IF OBJECT_ID('tempdb..#status') IS NOT NULL
            DROP TABLE #status;

        CREATE TABLE #status (
              databaseID INT
            , databaseName NVARCHAR(128)
            , objectID INT
            , page_count INT
            , schemaName NVARCHAR(128) NULL
            , objectName NVARCHAR(128) NULL
            , lastUpdateDate DATETIME
            , scanDate DATETIME
            CONSTRAINT PK_status_tmp PRIMARY KEY CLUSTERED (databaseID, objectID)
        );

        DECLARE @SQL NVARCHAR(MAX);
        DECLARE @dbName nvarchar(128);
        DECLARE @databaseID INT;
        DECLARE @objectID INT;
        DECLARE @schemaName NVARCHAR(128);
        DECLARE @objectName NVARCHAR(128);
        DECLARE @lastUpdateDate DATETIME;
        DECLARE @startTime DATETIME;

        SELECT @startTime = GETDATE();

        DECLARE cDB CURSOR READ_ONLY FOR
            select [name] from master.sys.databases where database_id > 4

        OPEN cDB
        FETCH NEXT FROM cDB INTO @dbName
        WHILE (@@fetch_status <> -1)
        BEGIN
            IF (@@fetch_status <> -2)
            BEGIN
                SELECT @SQL = ' use ' + QUOTENAME(@dbName) + '
                    select DB_ID() as databaseID
                        , DB_NAME() as databaseName
                        , t.object_id
                        , sum(used_page_count) as page_count
                        , s.[name] as schemaName
                        , t.[name] AS objectName
                        , COALESCE(d.stats_date, ''1900-01-01'')
                        , GETDATE() as scanDate
                    from sys.dm_db_partition_stats ps
                    join sys.tables t on t.object_id = ps.object_id
                    join sys.schemas s on s.schema_id = t.schema_id
                    join ( SELECT object_id, MIN(stats_date) as stats_date
                           FROM ( select object_id, stats_date(object_id, stats_id) as stats_date
                                  from sys.stats ) as d
                           GROUP BY object_id ) as d ON d.object_id = t.object_id
                    where ps.row_count > 0
                    group by s.[name], t.[name], t.object_id, COALESCE(d.stats_date, ''1900-01-01'') '
                SET ANSI_WARNINGS OFF;
                Insert #status EXEC ( @SQL);
                SET ANSI_WARNINGS ON;
            END
            FETCH NEXT FROM cDB INTO @dbName
        END
        CLOSE cDB
        DEALLOCATE cDB

        DECLARE cStats CURSOR READ_ONLY FOR
            SELECT databaseID, databaseName, objectID, schemaName, objectName, lastUpdateDate
            FROM #status
            WHERE DATEDIFF(dd, lastUpdateDate, GETDATE()) >= @minDays
            ORDER BY lastUpdateDate ASC, page_count desc, [objectName] ASC

        OPEN cStats
        FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate
        WHILE (@@fetch_status <> -1)
        BEGIN
            IF (@@fetch_status <> -2)
            BEGIN
                IF DATEDIFF(mi, @startTime, GETDATE()) > @timeLimit
                BEGIN
                    PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + '*** Time Limit Reached ***';
                    GOTO __DONE;
                END
                SELECT @SQL = 'UPDATE STATISTICS ' + QUOTENAME(@dBName) + '.' + QUOTENAME(@schemaName) + '.' + QUOTENAME(@ObjectName)
                    + ' WITH SAMPLE ' + CAST(@samplePercent AS NVARCHAR(100)) + ' PERCENT;';
                IF @printSQL = 1
                    PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + @SQL + ' (Last Updated: ' + CAST(@lastUpdateDate AS VARCHAR(100)) + ')'
                IF @executeSQL = 1
                BEGIN
                    EXEC (@SQL);
                END
            END
            FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate
        END
        __DONE:
        CLOSE cStats
        DEALLOCATE cStats
        PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Completed.'
        GO

    Read the article

  • Hell and Diplomacy: Notes on Software Integration

    - by ericajanine
    Well, I'm getting cabin fever and short-timer's ADD all at the same time. I haven't been anywhere outside of my greater city area in FOREVER and I'm only days away from my vacation. I have brainlock because the last few days have been non-stop diffusing amazingly hostile conversations. I think I'll write about that. So then, I "do" software. At the end of the day, software is pretty straightforward. Software is that thing we love and try to make do things not currently in play, in existence. If a process around getting software to do something is broken (like most actually are), then we should acknowledge it and move on. We are professional. We are helpful beyond the normal call of duty. We live and breathe making the lives better for those apps being active in the world. But above all--the shocker: We are SERVICE. In a service frame of mind, all perspectives shift to what is best overall for system stabilization vs. what must be in production to meet business objectives. It doesn't matter how much you like or dislike the creator of said software. It doesn't matter what time you went to bed last night or if your mate appreciates your Death March attitude. Getting a product in and when is an age-old dilemma in a software environment where more than, say, 3 people are involved. We know this. Taking a servant's perspective eliminates the drama surrounding what a group of half-baked developers forgot to tell each other in the 11th hour about their trampling changes before check-in. We, my counterparts in society, get paid to deal with that drama. I get paid to diffuse that drama and make everything integrate as smoothly as possible. At the end of the day, attacking someone over a minor detail not only makes things worse, it's against the whole point of our real existence. Being in support or software integration means you are to keep your eyes on the end game. That end game? It's making a solution work for all stakeholders, not just you or your immediate superior. Development and technology groups exist because business groups need them to exist and solve their issues. The end game? Doing what is best for those business groups ultimately. Period. Note: That does not mean you let your business users solely dictate when and if something gets changed in an environment you ultimately own. That's just crazy. Software and its environments are legitimately owned by those who manage it directly, no matter how important a business group believes it is to the existence of mankind. So, you both negotiate the terms of changing that environment and only do so upon that negotiation. Diplomacy is in order. So, to finish my thoughts: If you have no ability to keep your mouth shut in a situation where a business or development group truly need your help to make something work even beyond a deadline, find another profession. Beating up someone verbally because they screw up means a service attitude is not at the forefront of your motivation for doing what is ultimately their work and their product. Software, especially integration, requires a strong will and a soft touch to keep it on track. Not a hammer covered in broken glass.

    Read the article

  • Experimenting with other search engines

    - by Bill Graziano
    I’ve been a Google user so long I can hardly remember what I used before it.  Alta Vista maybe?  Or Yahoo.  I’ve tried Bing off and on but it never really stuck.  I probably care more about search engines than your average user because of their impact on SQLTeam.com.  Lately I’ve been trying two other search engines and actually switched to one of them. I’ve played with Blekko a little in the past.  They have some interesting ways to “slice up” your results.  For example, searching on “SQL Server /blogs /date” should just search all the recently updated blogs.  Those two extra words on the search are slashtags.  The full list of slashtags runs from /forums to just see forums to /twitter to /nikon to /reviews and on and on and on.  I laughed when I saw they had slashtags for both liberal and conservative.  I’d hate to find any search results that don’t match my existing worldview :)  You can also create your own slashtags.  I created a mini-search engine for the SQL Server blogs that I read.  You can search it for “backup” at http://blekko.com/ws/backup+/billgraziano/sql-sites.  I uploaded my OPML and it limited the search to just those sites.  It seems like the site is focusing more on curating results and less on algorithms.  This is an interesting site for those power searchers.  There are some great ways to curate results using slashtags.  For 99% of my searches (type words, click on one of the first few links) slashtags are overkill.  They do have some good information on page and site ranking though so I’ll probably send some time looking through that. Blekko recently got my attention again when they said they were banning “content farms” - and that includes eHow and experts-exchange.  I always feel used when I click on a link to EE and find myself scrolling all the way to the bottom to see if I can find the answer.  Sometimes it’s there but sometimes it tells me I need to pay first.  I’ve longed for a way to always exclude certain sites.  Blekko might be taking a hammer to a problem that needs a scalpel but it’s an interesting choice.  (And some of the comments in the TechCrunch link are interesting if you’re a search nerd.) DuckDuckGo is an odd name for a search engine.  Their big hook is that they don’t have search history.  If you wade through your Google account you can probably find the page where it stores your search history.  It was pretty enlightening to find mine.  It was easy to disable but that got me started looking at other search engines.  DDG (or DukGo) just feels like Google used to in the old days.  The results are good enough and the site is fast. Searches will return a snippet from WikiPedia or other site (like StackOverflow) at the top.  I think the idea is to answer the question without needing to visit the site.  I’m not sure that’s a good thing for SQLTeam.com. The only thing I really miss is image search.  You can add a “!i” at the end of any search and it will search the images on Bing.  Bing doesn’t have a great image search but it works for most of what I need.  They call these exclamation marks “!bangs” and they are kinda, sorta like slashtags.  I’ve been using DuckDuckGo now for a few weeks and I’m pretty happy with it.  I use Chrome for my browser and it was an easy switch to make.  It’s still a little surprising seeing my search results come up in a different format.  I’m starting to get used to it though.

    Read the article

  • Approach for packing 2D shapes while minimizing total enclosing area

    - by Dennis
    Not sure on my tags for this question, but in short .... I need to solve a problem of packing industrial parts into crates while minimizing total containing area. These parts are motors, or pumps, or custom-made components, and they have quite unusual shapes. For some, it may be possible to assume that a part === rectangular cuboid, but some are not so simple, i.e. they assume a shape more of that of a hammer or letter T. With those, (assuming 2D shape), by alternating direction of top & bottom, one can pack more objects into the same space, than if all tops were in the same direction. Crude example below with letter "T"-shaped parts: ***** xxxxx ***** x ***** *** ooo * x vs * x vs * x vs * x o * x * xxxxx * x * x o xxxxx xxx Right now we are solving the problem by something like this: using CAD software, make actual models of how things fit in crate boxes make estimates of actual crate dimensions & write them into Excel file (1) is crazy amount of work and as the result we have just a limited amount of possible entries in (2), the Excel file. The good things is that programming this is relatively easy. Given a combination of products to go into crates, we do a lookup, and if entry exists in the Excel (or Database), we bring it out. If it doesn't, we say "sorry, no data!". I don't necessarily want to go full force on making up some crazy algorithm that given geometrical part description can align, rotate, and figure out best part packing into a crate, given its shape, but maybe I do.. Question Well, here is my question: assuming that I can represent my parts as 2D (to be determined how), and that some parts look like letter T, and some parts look like rectangles, which algorithm can I use to give me a good estimate on the dimensions of the encompassing area, while ensuring that the parts are packed in a minimal possible area, to minimize crating/shipping costs? Are there approximation algorithms? Seeing how this can get complex, is there an existing library I could use? My thought / Approach My naive approach would be to define a way to describe position of parts, and place the first part, compute total enclosing area & dimensions. Then place 2nd part in 0 degree orientation, repeat, place it at 180 degree orientation, repeat (for my case I don't think 90 degree rotations will be meaningful due to long lengths of parts). Proceed using brute force "tacking on" other parts to the enclosing area until all parts are processed. I may have to shift some parts a tad (see 3rd pictorial example above with letters T). This adds a layer of 2D complexity rather than 1D. I am not sure how to approach this. One idea I have is genetic algorithms, but I think those will take up too much processing power and time. I will need to look out for shape collisions, as well as adding extra padding space, since we are talking about real parts with irregularities rather than perfect imaginary blocks. I'm afraid this can get geometrically messy fairly fast, and I'd rather keep things simple, if I can. But what if the best (practical) solution is to pack things into different crate boxes rather than just one? This can get a bit more tricky. There is human element involved as well, i.e. like parts can go into same box and are thus a constraint to be considered. Some parts that are not the same are sometimes grouped together for shipping and can be considered as a common grouped item. Sometimes customers want things shipped their way, which adds human element to constraints. 
so there will have to be some customization.
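    A first numeric estimate can come from a very reduced version of the problem: treat every part as its axis-aligned bounding rectangle (the simplification mentioned above) and pack rows into a crate of fixed width with a "shelf" heuristic. The Java below is a sketch under those assumptions - no rotation, no T-shape interlocking, single crate - so it only gives a quick upper bound to compare candidate crate sizes, not an optimal packing:

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        // Parts reduced to bounding rectangles (width x height), in the same unit as the crate width.
        final class Part {
            final double w, h;
            Part(double w, double h) { this.w = w; this.h = h; }
        }

        final class ShelfPacker {
            // Tallest parts first, filled left to right; a new shelf starts when the row is full.
            static double packedHeight(List<Part> parts, double crateWidth) {
                List<Part> sorted = new ArrayList<>(parts);
                sorted.sort(Comparator.comparingDouble((Part p) -> p.h).reversed());
                double usedHeight = 0, shelfHeight = 0, cursorX = 0;
                for (Part p : sorted) {
                    if (cursorX + p.w > crateWidth) {   // current shelf is full
                        usedHeight += shelfHeight;
                        cursorX = 0;
                        shelfHeight = 0;
                    }
                    cursorX += p.w;
                    shelfHeight = Math.max(shelfHeight, p.h);
                }
                return usedHeight + shelfHeight;
            }

            public static void main(String[] args) {
                // Made-up part sizes purely for illustration.
                List<Part> parts = List.of(new Part(4, 2), new Part(3, 3), new Part(2, 5), new Part(1, 1));
                double crateWidth = 6;
                System.out.println("Estimated enclosing area: " + crateWidth * packedHeight(parts, crateWidth));
            }
        }

    For the rotations and T-shape interlocking described above, the usual next steps are rectangle heuristics from the guillotine / MaxRects family, or no-fit-polygon based nesting for genuinely irregular outlines, with a search (greedy, genetic, or simulated annealing) over placement order on top.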

    Read the article

  • Improving long-polling Ajax performance

    - by Bears will eat you
    I'm writing a webapp (Firefox-compatible only) which uses long polling (via jQuery's ajax abilities) to send more-or-less constant updates from the server to the client. I'm concerned about the effects of leaving this running for long periods of time, say, all day or overnight. The basic code skeleton is this:

        function processResults(xml) {
            // do stuff with the xml from the server
        }

        function fetch() {
            setTimeout(function () {
                $.ajax({
                    type: 'GET',
                    url: 'foo/bar/baz',
                    dataType: 'xml',
                    success: function (xml) {
                        processResults(xml);
                        fetch();
                    },
                    error: function (xhr, type, exception) {
                        if (xhr.status === 0) {
                            console.log('XMLHttpRequest cancelled');
                        } else {
                            console.debug(xhr);
                            fetch();
                        }
                    }
                });
            }, 500);
        }

    (The half-second "sleep" is so that the client doesn't hammer the server if the updates are coming back to the client quickly - which they usually are.) After leaving this running overnight, it tends to make Firefox crawl. I'd been thinking that this could be partially caused by a large stack depth since I've basically written an infinitely recursive function. However, if I use Firebug and throw a breakpoint into fetch, it looks like this is not the case. The stack that Firebug shows me is only about 4 or 5 frames deep, even after an hour. One of the solutions I'm considering is changing my recursive function to an iterative one, but I can't figure out how I would insert the delay in between Ajax requests without spinning. I've looked at the JS 1.7 "yield" keyword but I can't quite wrap my head around it, to figure out if it's what I need here. Is the best solution just to do a hard refresh on the page periodically, say, once every hour? Is there a better/leaner long-polling design pattern that won't put a hurt on the browser even after running for 8 or 12 hours? Or should I just skip the long polling altogether and use a different "constant update" pattern since I usually know how frequently the server will have a response for me?

    Read the article

  • Anomalous results getting color components of some UIColors

    - by hkatz
    I'm trying to extract the rgb components of a UIColor in order to hand-build the pixels in a CGBitmapContext. The following sample code works fine for most of the UIColor constants but, confusingly, not all. To wit:

        CGColorRef color = [[UIColor yellowColor] CGColor];
        const float* rgba = CGColorGetComponents(color);
        float r = rgba[0];
        float g = rgba[1];
        float b = rgba[2];
        float a = rgba[3];
        NSLog( @"r=%f g=%f b=%f a=%f", r, g, b, a);

    The results for [UIColor yellowColor] above are r=1.000000 g=1.000000 b=0.000000 a=1.000000, as expected. [UIColor redColor] gives r=1.000000 g=0.000000 b=0.000000 a=1.000000, again as expected. Similarly for blueColor and greenColor. However, the results for [UIColor blackColor] and [UIColor whiteColor] seem completely anomalous, and I don't know what I'm doing wrong (if indeed I am). To wit, [UIColor blackColor] gives r=0.000000 g=1.000000 b=0.000000 a=0.000000, which is a transparent green, and [UIColor whiteColor] gives r=1.000000 g=1.000000 b=0.000000 a=0.000000, which is a transparent yellow. I'd appreciate it if somebody could either: (1) explain what I'm doing wrong, (2) replicate my anomalous results and tell me it's not me, or (3) hit me over the head with a big hammer so it stops hurting so much. Howard

    Read the article

  • Global.asax parser errors when deploying MVC 1 application to remote server.

    - by mannish
    So we're having some issues deploying an ASP.NET MVC app to a client site. Basically when we try to test the app from localhost, we get the dreaded Global.asax parser error indicating it could not load the application global. Research indicates there are basically 4 possible reasons for this exception we're seeing: The solution hasn't been built. This clearly isn't the case since we can deploy it here and it runs fine on any machine we deploy to AND we had to build and publish the darn thing to deploy it anyway. The Global.asax namespace inheritance does not match the application global code file. Again we double checked this and since it runs just fine here that can't be the issue. Miscellaneous non-descript IIS/VS.NET mischief. Basically something get's wonky in IIS or VS.NET and the web server won't behave correctly for this application. We've done cleans and rebuilds, we've deleted virtual dir and recreated, and performed all of the IIS munging that we've found elsewhere online. Various combinations of IIS bounces, server reboots, virtual dir/application recreation, etc. Code level permissions issue. We've verified full trust in machine/web config in the framework directory, we've set .NET trust to full in IIS, we've granted Everyone full control on the directories just to hit it with the security hammer, etc. etc. The pertinent detials: Windows Server 2008 x64 IIS 7, 32 bit compatible app pool (app was written on 32 bit OS compiled for any cpu) App pool identity set to NetworkService Microsoft ASP.NET MVC 1.0 XCopy deployment We deployed another read-only app just fine. The significant difference in this app is the use of NHibernate and Log4Net which require full trust. Additionally, the actual project name of the web project differs from the default namespace however the Inherits namespace in Global.asax and the Global.asax.cs files match so this shouldn't be an issue. Anybody have any bright ideas? We're officially down to just the dim ones.

    Read the article

  • Associate "Code/Properties/Stuff" with Fields in C# without reflection. I am too indoctrinated by J

    - by AlexH
    I am building a library to automatically create forms for Objects in the project that I am working on. The codebase is in C#, and essentially we have a HUGE number of different objects to store information about different things. If I send these objects to the client side as JSON, it is easy enough to programatically inspect them to generate a form for all of the properties. The problem is that I want to be able to create a simple way of enforcing permissions and doing validation on the client side. It needs to be done on a field by field level. In javascript I would do this by creating a parallel object structure, which had some sort of { permissions : "someLevel", validator : someFunction } object at the nodes. With empty nodes implying free permissions and universal validation. This would let me simply iterate over the new object and the permissions object, run the check, and deal with the result. Because I am overfamilar with the hammer that is javascript, this is really the only way that I can see to deal with this problem. My first implementation thus uses reflection to let me treat objects as dictionaries, that can be programatically iterated over, and then I just have dictionaries of dictionaries of PermissionRule objects which can be compared with. Very javascripty. Very awkward. Is there some better way that I can do this? Essentially a way to associate a data set with each property, and then iterate over those properties. Or else am I Doing It Wrong?

    Read the article

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

    Read the article

  • Obtaining command line arguments in a QT application

    - by morpheous
    The following snippet is from a little app I wrote using the QT framework. The idea is that the app can be run in batch mode (i.e. called by a script) or can be run interactively. It is important, therefore, that I am able to parse command line arguments in order to know which mode to run in, etc. [Edit] I am debugging using QTCreator 1.3.1 on Ubuntu Karmic. The arguments are passed in the normal way (i.e. by adding them via the 'Project' settings in the QTCreator IDE). When I run the app, it appears that the arguments are not being passed to the application. The code below is a snippet of my main() function.

        int main(int argc, char *argv[])
        {
            //Q_INIT_RESOURCE(application);
            try {
                QApplication the_app(argc, argv);
                //trying to get the arguments into a list
                QStringList cmdline_args = QCoreApplication::arguments();
                // Code continues ...
            } catch (const MyCustomException &e) {
                return 1;
            }
            return 0;
        }

    [Update] I have identified the problem - for some reason, although argc is correct, the elements of argv are empty strings. I put in this little code snippet to print out the argv items - and was horrified to see that they were all empty.

        for (int i = 0; i < argc; i++) {
            std::string s(argv[i]); //required so I can see the damn variable in the debugger
            std::cout << s << std::endl;
        }

    Does anyone know what on earth is going on (or a hammer)?

    Read the article

  • How to understand other people's CSS architectures?

    - by John
    I am reasonably good with CSS. However, when working with someone else's CSS, it's difficult for me to see the "bigger picture" in their architecture (but i have no problem when working with a CSS sheet I wrote myself). For example, I have no problems using Firebug to isolate and fix cross browser compatibility issues, or fixing a floating issue, or changing the height on a particular element. But if I'm asked to do something drastic such as, "I want the right sidebars of pages A, B, C and D to have a red border. I want the right side bars of pages E, F and G to have a blue border if and only if the user mouses over", then it takes me time a long time to map out all the CSS inheritance rules to see the "bigger picture". For some reason, I don't encounter the same difficulty with backend code. After a quick debriefing of how a feature works, and a quick inspection of the controller and model code, I will feel comfortable with the architecture. I will think, "it's reasonable to assume that there will be an Employee class that inherits from the Person Class that's used by a Department controller". If I discover inconvenient details that aren't consistent with overall architectural style, I am confident that I can hammer things back in place. With someone else's CSS work, it's much harder for me to see the "relationships" between different classes, and when and how the classes are used. When there are many inheritance rules, I feel overwhelmed. I'm having trouble articulating my question and issues... All I want to know is, why is it so much harder for me to see the bigger picture in someone else's CSS architecture than compared to someone else's business logic layer? **Does it have any thing to do with CSS being a relatively new technology, and there aren't many popular design patterns?

    Read the article

  • Trouble binding command in grid menu item.

    - by Pete
    I have a grid that's inside a usercontrol derived class called MediatedUserControl. I'm adding a context menu to let the user delete an item, but I've been unable to figure out how to bind the command to my command property. I'm using MVVM and my viewmodel implements a public ICommand property called DeleteSelectedItemCommand. However, when the view is displayed, I get the following message in the output window: System.Windows.Data Error: 4 : Cannot find source for binding with reference 'RelativeSource FindAncestor, AncestorType='BRO.View.MediatedUserControl', AncestorLevel='1''. BindingExpression:Path=DataContext.DeleteSelectedItemCommand; DataItem=null; target element is 'BarButtonItem' (HashCode=6860584); target property is 'Command' (type 'ICommand') I feel like I generally have a good handle on bindings like this and can't figure out what it is I'm missing here. Thanks for any help you can provide. <dxg:GridControl HorizontalAlignment="Left" Margin="12,88,0,0" x:Name="gridControl1" VerticalAlignment="Top" Height="500" Width="517" DataSource="{Binding ItemList}" BorderBrush="{StaticResource {x:Static SystemColors.ActiveBorderBrushKey}}" ShowBorder="True" Background="{StaticResource {x:Static SystemColors.ControlLightBrushKey}}" UseLayoutRounding="False" DataContext="{Binding}"> <dxg:GridControl.Columns> <dxg:GridColumn FieldName="Code" Header="Code" Width="107" /> <dxg:GridColumn FieldName="Name" Header="Item" Width="173" /> <dxg:GridColumn FieldName="PricePerItem" Header="Unit Price" Width="70"> <dxg:GridColumn.EditSettings> <dxe:TextEditSettings DisplayFormat="N2" /> </dxg:GridColumn.EditSettings> </dxg:GridColumn> <dxg:GridColumn FieldName="Quantity" Header="Qty" Width="50" AllowEditing="True" /> <dxg:GridColumn FieldName="TotalPrice" Header="Total Price" Width="90"> <dxg:GridColumn.EditSettings> <dxe:TextEditSettings DisplayFormat="N2" /> </dxg:GridColumn.EditSettings> </dxg:GridColumn> </dxg:GridControl.Columns> <dxg:GridControl.View> <dxg:TableView ShowIndicator="False" ShowGroupPanel="False" MultiSelectMode="Row" AllowColumnFiltering="False" AllowBestFit="False" AllowFilterEditor="False" AllowEditing="False" AllowGrouping="False" AllowSorting="False" AllowResizing="False" AllowMoving="False" AllowMoveColumnToDropArea="False" AllowDateTimeGroupIntervalMenu="False" > <dxg:TableView.RowCellMenuCustomizations> <dxb:BarButtonItem Name="deleteRowItem" Command="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType=view:MediatedUserControl, AncestorLevel=1}, Path=DataContext.DeleteSelectedItemCommand}"> </dxb:BarButtonItem> </dxg:TableView.RowCellMenuCustomizations> </dxg:TableView> </dxg:GridControl.View>

    Read the article

  • New Computer Build Questions

    - by MJ
    I'm in the process of gathering parts and specs for a new machine. I wear many hats, so the machine needs to do a lot. I need at least 2 monitor support, if not three. I also play many online MMOs (wow, aion, war hammer, etc), along with some freelance programming projects. I already have a case which is very large, so it will fit anything. I have 2 other SATA HDs. They are more for storage and basic programs. I feel that the best improvement could be done with a solid state HD, true or not? I'm more of a software/programming guy, so ANY input at all on improving this system build would be appreciated. I have a few questions with this list. AMD or Intel? I don't know enough about either to choose what would best fit me. Thanks! **EDIT: Thanks for the input everyone! Here are some answers: I do a lot of programming and gaming, so I do need things for both. The newer video card covers the gaming aspect, as well as allowing me to have many monitors. (hopefully upgrade to dual 30' or more) I don't need any additional HDs at this time. I have a SATA 160g and 120g from my previous computer, and a NAS system with over 2TB of storage on the homenetwork. I just wanted a fast HD for OS/programs/games. With the memory. I have used G.SKILL before in 2 system builds. It's done excellent for me in them. Very stable. **EDIT2: Made some additional changes. Lowered the power supply down to 750, which saves me more $$. Also changed the SSD to 2 WD 650G HDs. Thinking of doing a CPU upgrade to the 3.4GHZ AMD Phenom II X4 965 Black Edition Deneb 3.4GHz System Specs - Budget:$1500 CPU: AMD Phenom II X4 955 Black Edition Deneb 3.2GHz MB: GIGABYTE GA-MA790GPT-UD3H AM3 AMD 790GX HDMI ATX Memory: G.SKILL 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 1066 Video: DIAMOND 5870PE51G Radeon HD 5870 (Cypress XT) 1GB 256-bit GD Power Supply: XCLIO GREATPOWER 1000W ATX12V SLI Ready CrossFire Ready HD:Intel X25-M Mainstream SSDSA2MH080G2C1 2.5" 80GB SATA II MLC Changes: Power Supply: CORSAIR CMPSU-750TX 750W ATX12V / EPS12V HD: 2x Western Digital Caviar Blue WD6400AAKS 640GB CPU: AMD Phenom II X4 965 Black Edition Deneb 3.4GHz

    Read the article

  • Reality behind wireless security - the weakness of encrypting

    - by Cawas
    I welcome better key-wording here, both on tags and title, and I'll add more links as soon as possible. For some years I'm trying to conceive a wireless environment that I'd setup anywhere and advise for everyone, including from big enterprises to small home networks of 1 machine. I've always had the feeling using any kind of the so called "wireless security" methods is actually a bad design. I'm talking mostly about encrypting and pass-phrasing (which are actually two different concepts), since I won't even considering hiding SSID and mac filtering. I understand it's a natural way of thinking. With cable networking nobody can access the network unless they have access to the physical cable, so you're "secure" in the physical way. In a way, encrypting is for wireless what walling (building walls) is for the cables. And giving pass-phrases is adding a door with a key. But the cabling without encryption is also insecure. Someone just need to plugin and get your data! And while I can see the use for encrypting data, I don't think it's a security measure in wireless networks. As I said elsewhere, I believe we should encrypt only sensitive data regardless of wires. And passwords should be added to the users, always, not to wifi. For securing files, truly, best solution is backup. Sure all that doesn't happen that often, but I won't consider the most situations where people just don't care. I think there are enough situations where people actually care on using passwords on their OS users, so let's go with that in mind. For being able to break the walls or the door someone will need proper equipment such as a hammer or a master key of some kind. Same is true for breaking the wireless walls in the analogy. But, I'd say true data security is at another place. I keep promoting the Fonera concept as an instance. It opens up a free wifi port, if you choose so, and anyone can connect to the internet through that, without having any access to your LAN. It also uses a QoS which will never let your bandwidth drop from that public usage. That's security, and it's open. And who doesn't want to be able to use internet freely anywhere you can find wifi spots? I have 3G myself, but that's beyond the point here. If I have a wifi at home I want to let people freely use it for internet as to not be an hypocrite and even guests can easily access my files, just for reading access, so I don't need to keep setting up encryption and pass-phrases that are not whole compatible. I'll probably be bashed for promoting the non-usage of WPA 2 with AES or whatever, but I wanted to know from more experienced (super) users out there: what do you think? Is there really a need for encryption to have true wireless security?

    Read the article

  • General guidelines / workflow to convert or transfer video "professionally"?

    - by cloneman
    I'm an IT "professional" who sometimes has to deal with small video conversion / video cutting projects, and I'd like to learn "the right way" to do this. Every time I search Google, there's always a disaster for weird, low-maturity trialware, or random forums threads from 3-4 years ago indicating various antiquated method to do it. The big question is the following: What are the "general" guidelines and tools to transcode video into some efficient (lossless?) intermediary, for editing purposes, for the purpose of eventually re-encoding it after? It seems to me like even the simplest of formats and tasks are a disaster of endless trial & error, or expertise only known by hardened experts who have a swiss army kife of weird conversion tools that they use, almost as if mounting an attack against the project. Here are a few cases in point: Simple VOB files extracted from DVD footage can't be imported into Adobe Premiere directly. Virtualdub is an old software people keep recommending but doesn't seem to support newer formats. I don't even know how to tell with certainty which codecs a video has, and weather the image is interlaced or not, and what resolution and codecs I'm dealing with. Problems: Choosing a wrong interlace option which diminishes quality Choosing a wrong pixel aspect ratio (stretches the image) Choosing a wrong "project type" in Premiere causing footage to require scaling Being forced to use some weird program that will have any number of negative effects What I'm looking for: Books or "Real knowledge" on format conversions, recognized tools, etc. that aren't some random forum guides on how to deal with video formats. Workflow guidelines on identifying a format going from one format to another without problems as mentioned above. Documentation on what programs like Adobe Premiere can and can't do with regards to formats, so that I don't use a wrench as a hammer. TL;DR How should you convert or "prepare" a video file to ensure it will be supported by Premiere for editing? Is premiere a suitable program to handle cropping, encoding, or should other tools be used for this, when making a video montage from a variety of source formats? What are some good books to read that specifically deal with converting videos that use any number of codecs?

    Read the article

< Previous Page | 1 2 3 4 5  | Next Page >