Search Results

Search found 35363 results on 1415 pages for 'long click'.


  • Working with packed dates in SSIS

    - by Jim Giercyk
    One of the challenges recently thrown my way was to read an EBCDIC flat file, decode packed dates, and insert the dates into a SQL table.  For those unfamiliar with packed data, it is a way to store data at the nibble level (half a byte), and it was often used by mainframe programmers to conserve storage space.  In the case of my input file, the dates were 2 bytes long and represented the number of days that have passed since 01/01/1950.  My first thought was, in the words of Scooby, Hmmmmph?  But I love a good challenge, so I dove in.

    Reading in the flat file was rather simple.  The only difference between reading an EBCDIC and an ASCII file is the Code Page option in the connection manager.  In my case, I needed to use Code Page 1140 for EBCDIC (I could have also used Code Page 37).  Once the code page is set correctly, SSIS can understand what it is reading and it will convert the output to the default code page, 1252.  However, packed data is either unreadable or produces non-alphabetic characters, as we can see in the preview window.  Column 1 is actually the packed date; columns 0 and 2 are the values in the rest of the file.  We are only interested in Column 1, which is a 2-byte field representing a packed date.  Since packed data stores two digits per byte (one per nibble), those 2 character bytes hold 4 packed digits.  If you are confused, stay tuned… this will make sense in a minute.

    Right-click on your Flat File Source shape and select "Show Advanced Editor".  Here is where the magic begins.  By changing the properties of the output columns, we can access the packed digits from each byte.  By default, the Output Column data type is DT_STR.  Since we want to look at the bytes individually and not the entire string, change the data type to DT_BYTES.  Next, and most important, set UseBinaryFormat to TRUE.  This will write the HEX VALUES of the output string instead of writing the character values.  Now we are getting somewhere!

    Next, you will need to use a Data Conversion shape in your Data Flow to transform the 2-position byte stream into a 4-position Unicode string containing the packed data.  You need the string to be 4 characters long because it will contain the 4 packed digits.

    Direct the output of your data flow to a test table or file to see the results.  In my case, I created a test table.  Hold on a second!  That doesn't look like a date at all.  No, of course not.  It is a hex number which represents the days which have passed between 01/01/1950 and the date.  We have to convert the hex value to a decimal value, and use the DATEADD function to get a date value.
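    To make the "days since 01/01/1950" encoding concrete, here is a quick sanity check with a made-up sample value: hex 4E20 is 20000 in decimal, so a packed value of 0x4E20 should decode to the date 20000 days after 01/01/1950.

        -- Hypothetical sample value for illustration: 0x4E20 hex = 20000 decimal
        SELECT DATEADD(day, 20000, '01/01/1950') AS DecodedDate  -- 2004-10-04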
    Luckily, I have created a function to convert hex to decimal:

        -- =============================================
        -- Author:        Jim Giercyk
        -- Create date:   March, 2012
        -- Description:   Converts a Hex string to a decimal value
        -- =============================================
        CREATE FUNCTION [dbo].[ftn_HexToDec]
        (
            @hexValue NVARCHAR(6)
        )
        RETURNS DECIMAL
        AS
        BEGIN
            -- Declare the return variable here
            DECLARE @decValue DECIMAL

            IF @hexValue LIKE '0x%' SET @hexValue = SUBSTRING(@hexValue,3,4)

            DECLARE @decTab TABLE
            (
                decPos1 VARCHAR(2),
                decPos2 VARCHAR(2),
                decPos3 VARCHAR(2),
                decPos4 VARCHAR(2)
            )

            DECLARE @pos1 VARCHAR(1) = SUBSTRING(@hexValue,1,1)
            DECLARE @pos2 VARCHAR(1) = SUBSTRING(@hexValue,2,1)
            DECLARE @pos3 VARCHAR(1) = SUBSTRING(@hexValue,3,1)
            DECLARE @pos4 VARCHAR(1) = SUBSTRING(@hexValue,4,1)

            INSERT @decTab
            VALUES
            (
                CASE WHEN @pos1 = 'A' THEN '10'
                     WHEN @pos1 = 'B' THEN '11'
                     WHEN @pos1 = 'C' THEN '12'
                     WHEN @pos1 = 'D' THEN '13'
                     WHEN @pos1 = 'E' THEN '14'
                     WHEN @pos1 = 'F' THEN '15'
                     ELSE @pos1
                END,
                CASE WHEN @pos2 = 'A' THEN '10'
                     WHEN @pos2 = 'B' THEN '11'
                     WHEN @pos2 = 'C' THEN '12'
                     WHEN @pos2 = 'D' THEN '13'
                     WHEN @pos2 = 'E' THEN '14'
                     WHEN @pos2 = 'F' THEN '15'
                     ELSE @pos2
                END,
                CASE WHEN @pos3 = 'A' THEN '10'
                     WHEN @pos3 = 'B' THEN '11'
                     WHEN @pos3 = 'C' THEN '12'
                     WHEN @pos3 = 'D' THEN '13'
                     WHEN @pos3 = 'E' THEN '14'
                     WHEN @pos3 = 'F' THEN '15'
                     ELSE @pos3
                END,
                CASE WHEN @pos4 = 'A' THEN '10'
                     WHEN @pos4 = 'B' THEN '11'
                     WHEN @pos4 = 'C' THEN '12'
                     WHEN @pos4 = 'D' THEN '13'
                     WHEN @pos4 = 'E' THEN '14'
                     WHEN @pos4 = 'F' THEN '15'
                     ELSE @pos4
                END
            )

            SET @decValue = (CONVERT(INT,(SELECT decPos4 FROM @decTab)))         +
                            (CONVERT(INT,(SELECT decPos3 FROM @decTab))*16)      +
                            (CONVERT(INT,(SELECT decPos2 FROM @decTab))*(16*16)) +
                            (CONVERT(INT,(SELECT decPos1 FROM @decTab))*(16*16*16))

            RETURN @decValue
        END
        GO

    Making use of the function, I found the decimal conversion, added that number of days to 01/01/1950 and FINALLY arrived at my "unpacked relative date".  Here is the query I used to retrieve the formatted date:

        SELECT [packedDate] AS 'Hex Value',
               dbo.ftn_HexToDec([packedDate]) AS 'Decimal Value',
               CONVERT(DATE, DATEADD(day, dbo.ftn_HexToDec([packedDate]), '01/01/1950'), 101) AS 'Relative String Date'
          FROM [dbo].[Output Table]

    This technique can be used any time you need to retrieve the hex value of a character string in SSIS.  The date example may be a bit difficult to understand at first, but with SSIS becoming the preferred tool for enterprise-level integration for many companies, there is no doubt that developers will encounter these types of requirements with regularity in the future.  Please feel free to contact me if you have any questions.
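    (A footnote on the function above: on SQL Server 2008 and later you can likely skip the lookup approach entirely, because CONVERT with style 1 parses a '0x…' string into VARBINARY, and SQL Server casts VARBINARY to INT as a big-endian integer.  A minimal sketch, assuming the stored strings carry the '0x' prefix that the function checks for, and reusing the same table and column names as above:)

        -- Sketch: hex string -> binary (CONVERT style 1) -> int, then days -> date
        SELECT [packedDate] AS 'Hex Value',
               CONVERT(INT, CONVERT(VARBINARY(4), [packedDate], 1)) AS 'Decimal Value',
               DATEADD(day, CONVERT(INT, CONVERT(VARBINARY(4), [packedDate], 1)), '01/01/1950') AS 'Relative Date'
          FROM [dbo].[Output Table]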

    Read the article

  • Displaying Tweets in an ASP.NET UserControl with TweetSharp

    As I mentioned in my previous blog post, I'll be having a little fun over the next few weeks talking about all of the Social Media APIs I've been experimenting with. I thought I would start out today by talking about one of the most popular services, Twitter. Twitter is essentially a microblogging service that allows its users to post messages of up to 140 characters, known as tweets. Users can subscribe to each other and see tweets from all of their subscriptions in their main feed listings. Due to this ease of providing and receiving updates to and from the masses, Twitter has been adopted by everyone from bloggers to celebrities to large corporations. If you have a blog or just a website in general, having a Twitter account alongside it could potentially be a useful way to attract new visitors. Displaying your tweets on your website and ...

    Read the article

  • Backbone.js, Rails and code duplication

    - by Matteo Pagliazzi
    I'm building a web app and I need a JS framework like Backbone.js to work with my back end provided by Rails, which mostly returns JSON objects after DB queries. Searching the web I've discovered Backbone, which seems to be complete, quite popular and actively developed, but I've noticed that a lot of things done by Backbone are simply a duplicate of the work done by Rails: for example validation and models. My idea of a "perfect" (for my actual needs) JS MVC (it can't really be called MVC, but I don't have another name for it) is something really simple that has a function for each action in my Rails controller, triggered by a specific event (user/hash changes, click on a button...), which sends a request to the server; the server responds with a JSON object, and then I'll load a template or execute some JS code. Do you have any concerns/suggestions about my idea? Do you know of some "micro" JS framework like what I have described? If you have worked with Backbone.js + Rails, what can you suggest?

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into my data warehouse. The data I was importing from the business database into the warehouse was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then run a stored proc in the warehouse that was the MERGE statement that took the rows from the working table and updated the real fact table.

        USE Warehouse

        CREATE TABLE Integration.MergePolicy
        (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5))

        CREATE TABLE fact.Policy
        (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date)

        CREATE PROC Integration.MergePolicy as
        begin
            begin tran

            Merge fact.Policy as tgt
            Using Integration.MergePolicy as Src
            On (tgt.PolicyId = Src.PolicyId)
            When not matched by Target then
                Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
                values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
            When matched and src.Operation = 'U' then
                Update set PolicyTypeKey = src.PolicyTypeKey,
                           Premium = src.Premium,
                           Deductible = src.Deductible,
                           EffectiveDate = src.EffectiveDate
            When matched and src.Operation = 'D' then
                Delete
            ;

            delete from Integration.MergePolicy

            commit
        end

    Notice that my work table (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, it was getting about 1.5 million rows inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking.

    This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement, and 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.
    (I was beginning now to suspect that my problem was because the work table was being stored as a heap.) Then I turned on STATISTICS IO and ran the sproc again. The output was quite interesting.

        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    I've reproduced the above from memory, so the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT COUNT(*) FROM Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages to tables stored as a heap, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.)

    I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike
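    As a recap, the fix applied above boils down to two statements; a minimal sketch against the work table from this post (the index name and key column are my own picks for illustration):

        -- Empty the heap quickly (TRUNCATE deallocates its pages outright),
        -- then add a clustered index so an empty table reads a single page again.
        TRUNCATE TABLE Integration.MergePolicy;
        CREATE CLUSTERED INDEX IX_MergePolicy_PolicyId ON Integration.MergePolicy (PolicyId);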

    Read the article

  • Cannot Create a connection to Data Source VB 2010 [closed]

    - by CLO_471
    I seem to be having some issues with my Visual Basic 2010. I am trying to create a connection to a data source and it is just not working. Even my old connections in my other projects are not working. In VB I try to create a connection by clicking Add New Data Source > Database > DataSet > New Connection, and when I click on New Connection the screen disappears and I am not able to select anything. Does anyone know of a glitch or something? I have checked my ODBC connections and all is good, and I have been able to play around with my Access connections (which I am trying to connect to) and queries, and everything seems to be working fine. I have rebooted several times, uninstalled and reinstalled VB, and have also repaired the entire application. I am not sure what else to try or what else to do. Any help would be much appreciated. My computer specs are XP SP3, Core 2 Duo at 2.80GHz and 3GB RAM.

    Read the article

  • PASS: 2013 Summit Location

    - by Bill Graziano
    HQ recently posted a brief update on our search for a location for 2013.  It includes links to posts by four Board members and two community members. I’d like to add my thoughts to the mix and ask you a question.  But I can’t give you a real understanding without telling you some history first. So far we’ve had the Summit in Chicago, San Francisco, Orlando, Dallas, Denver and Seattle.  Each has a little different feel and distinct memories.  I enjoyed getting drinks by the pool in Orlando after the sessions ended.  I didn’t like that our location in Dallas was so far away from all the nightlife.  Denver was in downtown but we had real challenges with hotels.  I enjoyed the different locations.  I always enjoyed the announcement during the third keynote with the location of the next Summit. There are two big events that impacted my thinking on the Summit location.  The first was our transition to the new management company in early 2007.  The event that September in Denver was put on with a six month planning cycle by a brand new headquarters staff.  It wasn’t perfect but came off much better than I had dared to hope.  It also moved us out of the cookie cutter conferences that we used to do into a model where we have a lot more control.  I think you’ll all agree that the production values of our last few Summits have been fantastic.  That Summit also led to our changing relationship with Microsoft.  Microsoft holds two seats on the PASS Board.  All the PASS Board members face the same challenge: we all have full-time jobs and PASS comes in second place professionally (or sometimes further back).  Starting in 2008 we were assigned a liaison from Microsoft that had a much larger block of time to coordinate with us.  That changed everything between PASS and Microsoft.  Suddenly we were talking to product marketing, Microsoft PR, their event team, the Tech*Ed team, the education division, their user group team and their field sales team – locally and internationally.  We strengthened our relationship with CSS, SQLCAT and the engineering teams.  We had exposure at the executive level that we’d never had before.  And their level of participation at the Summit changed from under 100 people to 400-500 people.  I think those 400+ Microsoft employees have value at a conference on Microsoft SQL Server.  For the first time, Seattle had a real competitive advantage over other cities. I’m one that looked very hard at staying in Seattle for a long, long time.  I think those Microsoft engineers have value to our attendees.  I think the increased support that Microsoft can provide when we’re in Seattle has value to our attendees.  But that doesn’t tell the whole story.  There’s a significant (and vocal!) percentage of our membership that wants the Summit outside Seattle.  Post-2007 PASS doesn’t know what it’s like to have a Summit outside of Seattle.  I think until we have a Summit in another city we won’t really know the trade-offs. I think a model where we move every third or every other year is interesting.  But until we have another Summit outside Seattle and we can evaluate the logistics and how important it is to have depth and variety in our Microsoft participation we won’t really know. Another benefit that comes with a move is variety or diversity.  I learn more when I’m exposed to new things and new people.  I believe that moving the Summit will give a different set of people an opportunity to attend. 
Grant Fritchey writes “It seems that the board is leaning, extremely heavily, towards making it a permanent fixture in Seattle.”  I don’t believe that’s true.  I know there was discussion of that earlier but I don’t believe it’s true now. And that brings me to my question.  Do we announce the city now or do we wait until the 2012 Summit?  I’m happy to announce Seattle vs. not-Seattle as soon as we sign the contract.  But I’d like to leave the actual city announcement until the 2011 Summit.  I like the drama and mystery of it.  I also like that it doesn’t give you a reason to skip a Summit and wait for the next one if it’s closer or back in Seattle.  The other side of the coin is that your planning is easier if you know where it is.  What do you think?

    Read the article

  • Sound driver for motherboard gigabyte ga-g1975x-c (Creative Sound Blaster Live 24-bit) (alsa, ca0106)

    - by Mikl
    My motherboard is a Gigabyte GA-G1975X-C with integrated audio "Creative Sound Blaster Live 24-bit". I have installed Ubuntu 10.10, and there was no sound at all. ALSA drivers were already installed. Finally, after a long search, I found how to make my sound work by adding this line to /etc/modprobe.d/alsa-base.conf and restarting:

        options snd-ca0106 subsystem=0x10121102

    After restarting, my speakers and microphone work fine. Maybe somebody knows a different/better subsystem code for my sound card?

    Read the article

  • How to clear the recent server name list in SQL Server Management Studio

    - by Pavan Kumar Pabothu
    If you use SQL Server Management Studio much, you will notice the list of server names that accumulates in its login dialog. As you can imagine, after 6 months or a year you will see a long list of server names there. How do you clear this list? Management Studio doesn't provide a mechanism to clear the list, so you'll have to do a little browsing through your file system. For SQL Server 2005 Management Studio, delete the file C:\Documents and Settings\<user>\Application Data\Microsoft\Microsoft SQL Server\90\Tools\Shell\mru.dat. For SQL Server 2008 Management Studio, delete the file C:\Documents and Settings\<user>\Application Data\Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin. After deleting the file, log in to Management Studio again and the list will be empty.

    Read the article

  • Oracle Database 11g Release 2 for Windows available!

    - by Mike Dietrich
    Hi there, just returned from vacation - and the Easter bunny (was its name Tux??) just delivered the Windows release (32-bit and 64-bit) of Oracle Database 11g Release 2. It's available for download from edelivery.oracle.com or OTN:

        Oracle Database 11g Release 2 for Windows 32-bit
        Oracle Database 11g Release 2 for Windows 64-bit

    And if you wonder why it took sooooooo long to release Oracle Database 11g Release 2 on the Windows platform: the developers have incorporated a lot of the available fixes on top of 11.2.0.1.0 - so it's more a 11.2.0.1.1/2 ;-) And don't forget to download the newest version of the upgrade slides: http://apex.oracle.com/folien Use the keyword (Schluesselwort): upgrade112

    Read the article

  • StorageTek SL8500 Release 8.3 available

    - by uwes
    Boosting Performance and Enhancing Reliability with StorageTek SL8500 Release 8.3! We're pleased to announce the availability of SL8500 8.3 firmware, which supports partitioning for library complexes, library media validation, drive tray serial number reporting, and StorageTek T10000D tape drives! StorageTek SL8500 8.3 supports the following:

    Library Complex Partitioning:
      - Provides support for partitioning across an SL8500 library complex
      - Supports up to 16 partitions per library complex

    Library Media Validation:
      - Utilizing StorageTek Library Console, users can initiate media verifications with our StorageTek T10000C/D tape drives on StorageTek T10000 T1 and T2 media
      - Supports 3 scan options: basic verify, standard verify and complete verify

    Please read the Sales Bulletin (Firmware release 8.31) on Oracle HW TRC for more details. (If you are not registered on Oracle HW TRC, click here ... and follow the instructions.) For more information go to: Oracle.com Tape Page, Oracle Technology Network Tape Page

    Read the article

  • Is there any working, usable video editing software for ubuntu?

    - by matteo
    Is there any decently stable video editing software for Ubuntu, that won't crash at every mouse click and that you can actually use for doing things that you need (as opposed to just testing it for its own sake)? I've tried out PiTiVi and Cinelerra, but they are completely unstable (they both crash very often), besides having very poorly designed interfaces... Is there any better option, or are we still stuck with Windows and Mac OS for even the most basic (but real-life) video editing? Even something as basic as Avidemux would be fine in many situations (not all) if only it worked (i.e. if it were not rendered completely useless by its bugs).

    Read the article

  • Does Ubuntu Touch consume less than Android?

    - by Eduard Florinescu
    One of the problems with new OSes is power consumption. That is because power and performance require a lot of tweaks and experience with the kernel, drivers and OS code base on one hand, and a lot of extensive long-term testing and quality assurance on the other hand. Given that Android is a rather old and established OS, I saw that it has pretty good power consumption. Phoronix does this kind of comparison, but I was not able to find much about Ubuntu Touch. Does Ubuntu Touch consume less than Android in general? Do you have comparative data on some platforms?

    Read the article

  • Row Count Plus Transformation

    As the name suggests, we have taken the current Row Count Transform that is provided by Microsoft in the Integration Services toolbox, recreated the functionality and extended upon it. There are two things about the current version that we thought could do with cleaning up:

      - Lack of a custom UI
      - You have to type the variable name yourself

    In the Row Count Plus Transformation we solve these issues for you. Another thing we thought was missing is the ability to calculate the time taken between components in the pipeline. An example usage would be that you want to know how many rows flowed between Component A and Component B and how long it took. Again we have solved this issue. Credit must go to Erik Veerman of Solid Quality Learning for the idea behind noting the duration. We were looking at one of his packages and saw that he was doing something very similar, but he was using a Script Component as a transformation. Our philosophy is that if you have to write or copy and paste the same piece of code more than once, then you should be thinking about a custom component - and here it is.

    The Row Count Plus Transformation populates variables with the values returned from:

      - Counting the rows that have flowed through the path
      - Returning the time in seconds between when it first saw a row come down this path and when it saw the final row

    It is possible to leave both these boxes blank and the component will still work. All input columns are passed through the transformation unaltered; you are not permitted to change or add to the inputs or outputs of this component. Optionally, you can set the component to fire an event, which happens during the PostExecute phase of the execution. This can be useful to improve visibility of this information, such that it is captured in package logging, or it can be used to drive workflow in the case of an error event.

    Properties

        Property                  Data Type                        Description
        OutputRowCountVariable    String                           The name of the variable into which the number of rows read will be passed (optional).
        OutputDurationVariable    String                           The name of the variable into which the duration in seconds will be passed (optional).
        EventType                 RowCountPlusTransform.EventType  The type of event to fire during post execute, included in which are the row count and duration values.

    RowCountPlusTransform.EventType Enumeration

        Name         Value  Description
        None         0      Do not fire any event.
        Information  1      Fire an Information event.
        Warning      2      Fire a Warning event.
        Error        3      Fire an Error event.

    Installation

    The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages.

    For 2005/2008 only - finally, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Row Count Plus Transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component?
    We recommend you follow best practice and apply the current Microsoft SQL Server service pack to your SQL Server servers and workstations; this component requires a minimum of SQL Server 2005 Service Pack 1.

    Downloads

    The Row Count Plus Transformation is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.

        Row Count Plus Transformation for SQL Server 2005
        Row Count Plus Transformation for SQL Server 2008
        Row Count Plus Transformation for SQL Server 2012

    Version History

    SQL Server 2012
      - Version 3.0.0.6 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012)

    SQL Server 2008
      - Version 2.0.0.5 - SQL Server 2008 release. (15 Oct 2008)

    SQL Server 2005
      - Version 1.1.0.43 - Bug fix for duration. For long running processes the duration second count may have been incorrect. (8 Sep 2006)
      - Version 1.1.0.42 - SP1 compatibility testing. Added the ability to raise an event with the count and duration data for easier logging or workflow. (18 Jun 2006)
      - Version 1.0.0.1 - SQL Server 2005 RTM. Made available as general public release. (20 Mar 2006)

    Troubleshooting

    Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012. If you get an error when you try and use the component along the lines of "The component could not be added to the Data Flow task. Please verify that this component is properly installed. ... The data flow object "Konesans ..." is not installed correctly on this computer", this usually indicates that the internal cache of SSIS components needs to be updated. This cache is held by the SSIS service, so you need to restart the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Once installation is complete you need to manually add the task to the toolbox before you will see it and be able to add it to packages - see How do I install a task or transform component?

    Read the article

  • What are you telling yourself if you can't understand a new concept, paradigm, feature ... ?

    - by Freshblood
    Programming always requires learning new concepts, paradigms, features and technologies, and I have always failed at the first attempt to understand a new concept I encounter. I start to blame and humiliate myself, forgetting how I eventually understood earlier concepts I hadn't understood before. I can hardly stop telling myself "Why can't I understand? Am I stupid or an idiot? Yes, I am stuppiiddddd!!!" What does your inner voice tell you if you cannot understand a new concept after spending a long time on it, until you're tired or hopeless? How do you handle your self-esteem in such situations?

    Read the article

  • Is the Mailchimp API available in other languages?

    - by boundaryfunctions
    I'm using the Mailchimp API in combination with PHP and jQuery to provide the subscribe/unsubscribe actions on a website via Ajax. On errors with user data you get useful messages like "Invalid Email Address", "[email protected] is already subscribed to list x. Click here to update your profile." or "There is no record of "[email protected]" in the database". For sure I want to keep these messages, but is there a way I can get them in other languages (in particular in German)? How would I achieve this? I wasn't able to find anything about it in the Mailchimp docs. I wouldn't like to translate them myself...

    Read the article

  • Unit testing time-bound code

    - by maasg
    I'm currently working on an application that does a lot of time-bound operations. That is, based on long now = System.currentTimeMillis(); and combined with a scheduler, it will calculate periods of time that parametrize the execution of some operations, e.g.:

        public void execute(...) { // executed by a scheduler every x minutes
            final int now = (int) TimeUnit.MILLISECONDS.toSeconds(System.currentTimeMillis());
            final int alignedTime = now - now % getFrequency();
            final int startTime = alignedTime - 2 * getFrequency();
            final int endTimeSecs = alignedTime - getFrequency();
            uploadData(target, startTime, endTimeSecs);
        }

    Most parts of the application are unit-tested independently of time (in this case, uploadData has a natural unit test), but I was wondering about best practices for testing the time-bound parts that rely on System.currentTimeMillis().

    Read the article

  • WebCast: WebCenter Portal, April 03rd, 2012

    - by rituchhibber
    Our next WebCenter Portal webcast will be on April 03rd, 2012. This webcast will help you prepare for the WebCenter Portal Certified Implementation Specialist exam.

    Webcast details:

        Date:              April 03rd
        Topic:             WebCenter Portal Refresh Course
        Speaker:           Yannick Ongena, InfoMentum (WebCenter Portal Specialized Partner)
        Web call details:  Join Webcast; dial-in numbers: CC/SP: 1579222/9221
        Time:              12:00-15:00 CET (break around 13:30)
        Intercall details: Conference ID/Key: 9729232/0304

    For more details, please click here.

    Read the article

  • Code Monster Helps Introduce Kids (and Curious Adults) to the Basics of Programming

    - by Jason Fitzpatrick
    If you’re looking for a fun way to introduce a kid to programming (or sate your own curiosity), Crunchzilla’s Code Monster is a real-time introduction to basic programming concepts. How does Code Monster work? Users are guided through the programming experience (using JavaScript) by a talkative blue monster that asks questions about the code and suggests courses of action. Play long enough and you travel from simple variables to more complex ideas like conditionals, expressions, and more. It’s not a comprehensive programming curriculum (nor does it claim to be) but it’s a great way to introduce people of all ages to programming. Hit up the link below to take it for a spin. Code Monster [via O'Reilly Radar]

    Read the article

  • The Best Application Launchers and Docks for Organizing Your Desktop

    - by Lori Kaufman
    Is your desktop so cluttered you can’t find anything? Is your Start menu so long you have to scroll to see what programs are there? If so, you probably need an application launcher to organize your desktop and make your life easier. We’ve created a list of many useful application launchers in different forms. You can choose from dock programs, portable application launchers, Start menu and Taskbar replacements, and keyboard-oriented launchers.

    Read the article

  • To Catch A Thief at Microsoft DevDays 2010

    Here's a quick update. I was down at a nice reception at the hotel for the conference speakers when a door is violently thrown open and a guy goes running through and down the hall. Following closely behind was a security guard. I immediately took off running after both of them. We tore down a long hallway and out the door of the hotel into the street. I had caught up to the security guard, but the thief had put a little distance between himself and the guard. The guard gave up the chase. The crook...

    Read the article

  • tail stops displaying after log rotation

    - by Rudy Vissers
    I have to tail the log of a server (ServiceMix) and log rotation is enabled. As soon as the rotation happens, tail stops displaying. I did some investigation and it is a bug in Debian: Debian Bug Report. The bug has been around for a long time. Does anyone know if this bug is going to be fixed in Ubuntu? I'm on Ubuntu 12.04 64-bit. I don't have to mention that this bug is total hell! Every time I have the problem, I have to interrupt the tail command and re-execute it!

    Read the article

  • Webpage redirection time

    - by Abhijeet Ashok Muneshwar
    I want to calculate the time consumed in redirecting from one webpage to another. For example:

    1) I am using Facebook in the Google Chrome browser. I have shared a link on my Facebook profile like below: http://www.webdeveloper.com/ (It's not only Facebook - it can be any domain having a link to another domain.)

    2) When I click on this link from my Facebook profile, the website will open in a new tab.

    3) I want to calculate the time difference in milliseconds or microseconds between the two events below:

    First event: the time of clicking the link "http://www.webdeveloper.com/" from my Facebook profile.

    Second event: the time of completely loading the webpage of "http://www.webdeveloper.com/".

    Thank you in advance.

    Read the article

  • Switch Gmail Icons Back to Text Labels

    - by Jason Fitzpatrick
    If Gmail’s icon-based buttons annoy you, it’s now possible to switch them back to the old text labels with a simple settings toggle. At MakeUseOf they highlight the new option in Gmail and how you can switch back to the old button layout: So how do you make that happen? All you have to do is click on the cog button, choose “Settings”, and go to the General tab. Scroll down to find the “Button labels” setting, and change it from icons to text. I know what I’ll be doing shortly; text-based button labels, here I come. [via MakeUseOf]

    Read the article

  • Are there sources of email marketing data available?

    - by Gortron
    Are sources of email marketing data available to the public? I would like to examine email marketing data to see what kind of content a business sends out, the frequency of sending, the number of people emailed, and especially the resulting open rates and click-through rates. Are businesses willing to share data on their previous email marketing campaigns without divulging their contact lists? I would like to use this data to create an application that helps businesses create better newsletters, using the data as a benchmark - basically sharing what works and what doesn't for each industry.

    Read the article
