Search Results

Search found 21436 results on 858 pages for 'draw order'.


  • Virus that duplicates word documents as exe

    - by Bob Rivers
    Hi, We are facing a virus problem on our network, but I'm unable to identify it, so we can't properly deal with it. The symptoms are that the virus duplicates a Word document (.doc), creating a new file with the same name but with an .exe extension, and after that it hides the original file. So, when the user clicks the file, the virus propagates itself. Symantec AV seems to be able to block it: every time the virus tries to generate the .exe, Symantec blocks it, but by that point the original file has already been hidden, so the user thinks the file has been deleted. Symantec identifies it as a simple trojan horse. I have already started a full scan, but it didn't find anything. I'm trying to find out the virus name in order to fight it. Does anyone have any kind of information? TIA, Bob

    Read the article

  • SQL SERVER – Identify Most Resource Intensive Queries – SQL in Sixty Seconds #029 – Video

    - by pinaldave
    There are a few questions I often get asked. It is interesting how, in our daily work, many of us often need the same kind of information at the same time. Here are examples of such questions:
    How many user-created tables are there in the database?
    How many non-clustered indexes does each table in the database have?
    Is a table a heap, or does it have a clustered index on it?
    How many rows does each table in the database contain?
    I finally wrote down a very quick script (in less than sixty seconds when I originally wrote it) which can answer the questions above. I also created a very quick video to explain the results and how to execute the script. Here is the complete script which I have used in the SQL in Sixty Seconds video.
    SELECT [schema_name] = s.name,
        table_name = o.name,
        MAX(i1.type_desc) ClusteredIndexorHeap,
        COUNT(i.TYPE) NoOfNonClusteredIndex,
        p.rows
    FROM sys.indexes i
    INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
    INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
    LEFT JOIN sys.partitions p ON p.OBJECT_ID = o.OBJECT_ID AND p.index_id IN (0,1)
    LEFT JOIN sys.indexes i1 ON i.OBJECT_ID = i1.OBJECT_ID AND i1.TYPE IN (0,1)
    WHERE o.TYPE IN ('U') AND i.TYPE = 2
    GROUP BY s.name, o.name, p.rows
    ORDER BY schema_name, table_name
    Related Tips in SQL in Sixty Seconds:
    Find Row Count in Table – Find Largest Table in Database
    Find Row Count in Table – Find Largest Table in Database – T-SQL
    Identify Numbers of Non Clustered Index on Tables for Entire Database
    Index Levels, Page Count, Record Count and DMV – sys.dm_db_index_physical_stats
    Index Levels and Delete Operations – Page Level Observation
    What would you like to see in the next SQL in Sixty Seconds video?
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology Tagged: Excel

    Read the article

  • SQL SERVER – 2012 – Summary of All the Analytic Functions – MSDN and SQLAuthority

    - by pinaldave
    SQL Server 2012 (RC0 available here) has introduced new analytic functions. These functions were long awaited and I am glad that they are here. Previously, when any of these functions was needed, people used to write long T-SQL code to simulate it; now there is no need for that. Having a native function available also helps performance as well as readability. In the last few days I have written many articles on this subject on my blog. The goal was to make these complex analytic functions easy to understand and widely adopted. As these new functions are available and awareness spreads, we should start using them. Here is a quick list of the new functions; each one links to the relevant SQLAuthority article and MSDN page:
    CUME_DIST – SQLAuthority – MSDN
    FIRST_VALUE – SQLAuthority – MSDN
    LAST_VALUE – SQLAuthority – MSDN
    LEAD – SQLAuthority – MSDN
    LAG – SQLAuthority – MSDN
    PERCENTILE_CONT – SQLAuthority – MSDN
    PERCENTILE_DISC – SQLAuthority – MSDN
    PERCENT_RANK – SQLAuthority – MSDN
    I also enjoyed three different puzzles during the course of this series which gave a clear idea of the SQL Server 2012 analytic functions:
    SQL SERVER – Puzzle to Win Print Book – Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY
    SQL SERVER – Puzzle to Win Print Book – Write T-SQL Self Join Without Using LEAD and LAG
    SQL SERVER – Puzzle to Win Print Book – Explain Value of PERCENTILE_CONT() Using Simple Example
    This series will always be dear to me, as during it I went through the very unique experience of my book going out of stock and becoming available again after 48 hours. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • .htaccess to block by file name possible?

    - by Tiffany Walker
    I have a bunch of files named secure_xxxxxx.php. Is there a way to use .htaccess to block access to all the secure_* PHP files based on IP? EDIT: I've tried the following, but I get 500 errors:
    <FilesMatch "^secure_.*\.php$">
    order deny all
    deny from all
    allow from my ip here
    </FilesMatch>
    I don't see any errors in the Apache error logs either. httpd -M gives: Loaded Modules: core_module (static) authn_file_module (static) authn_default_module (static) authz_host_module (static) authz_groupfile_module (static) authz_user_module (static) authz_default_module (static) auth_basic_module (static) include_module (static) filter_module (static) log_config_module (static) logio_module (static) env_module (static) expires_module (static) headers_module (static) setenvif_module (static) version_module (static) proxy_module (static) proxy_connect_module (static) proxy_ftp_module (static) proxy_http_module (static) proxy_scgi_module (static) proxy_ajp_module (static) proxy_balancer_module (static) ssl_module (static) mpm_prefork_module (static) http_module (static) mime_module (static) dav_module (static) status_module (static) autoindex_module (static) asis_module (static) info_module (static) suexec_module (static) cgi_module (static) dav_fs_module (static) negotiation_module (static) dir_module (static) actions_module (static) userdir_module (static) alias_module (static) rewrite_module (static) so_module (static) fastinclude_module (shared) auth_passthrough_module (shared) bwlimited_module (shared) frontpage_module (shared) suphp_module (shared) Syntax OK

    Read the article

  • Fix: Azure Disabled over 49 cents? Beware of using a Java Virtual Machine on Microsoft Azure

    - by Ken Cox [MVP]
    I love my MSDN Azure account. I can spin up a demo/dev app or VM in seconds. In fact, it is so easy to create a virtual machine that Azure shut down my whole account! Last night I spun up a Java Virtual Machine to play with some Android stuff. My mistake was that I didn’t read the Virtual Machine pricing warning: “I have a MSDN Azure Benefit subscription. Can I use my monthly Azure credits to purchase Oracle software?” “No, Azure credits in our MSDN offers are not applicable to Oracle software. In order to purchase Oracle software in the MSDN Azure Benefit subscription, customers need to turn off their {0} spending limit and pay at the regular pay-as-you-go rate. Otherwise, Oracle usage will hit the {1} spending limit and the subscription will be immediately disabled.”  Immediately disabled? Yup. Everything connected to the subscription was shut off, deallocated, rendered useless - even the free Web sites and the free Sendgrid email service.  The fix? I had to remove the spending limit from my account so I could pay $0.49 (49 cents) for the JVM usage. I still had $134.10 in credits remaining for regular usage with 6 days left in the billing month.  Now the restoration/clean-up begins… figuring out how to get the web sites and services back online.  To me, the preferable way would be for Azure to warn me when setting up a JVM that I had no way of paying for the service. In the alternative, shut down just the offending services – the ones that can’t be covered by the regular credits. What a mess.

    Read the article

  • What is the best way to build a database from a MS Word document?

    - by Jayron Soares
    Please advise me on how to approach this problem: I have a sequential list of metadata in an MS Word document. The basic idea is to create a Python algorithm to iterate over the information so that just the name of the PROCESS can be retrieved from a database when a query is made. Example metadata:
    Process: Process Walker (1965)
    Exact reference: Walker Process Equipment., Inc. v. Food Machinery Corp.
    Link: http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=US&vol=382&invol=
    Type of procedure: Certiorari to the United States Court of Appeals for the Seventh Circuit.
    Parties: Walker Process Equipment, Inc.
    Sector: Systems is ...
    Start Date: October 12-13 Arguedas, 1965
    Summary: Food Machinery Company has initiated a process to stop or slow the entry of competitors through the use of a patent obtained by fraud. The case concerned a patent on "knee action swing diffusers" used in aeration equipment for sewage treatment systems, and the question was whether "the maintenance and enforcement of a patent obtained by fraud before the patent office" may be a basis for antitrust punishment.
    Report of the evolution process: petitioner, in answer to respond...
    Importance: a) First case which established an analysis for the diagnosis of dispute…
    There are about 200 pages containing information like the above. I have in mind implementing an algorithm in Python to break up this sequence of information and store it in a web database (an open-source application that I'm still looking for) in order to allow free consultations.
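    To illustrate one possible shape of the parsing step, here is a minimal Python sketch. It assumes the Word document has first been saved as plain text with one "Field: value" pair per line and that every record begins with a "Process:" line; the field list, the file names and the SQLite table are illustrative assumptions, not details taken from the question.
    import re
    import sqlite3

    # Field names taken from the example record above; adjust against the real document.
    FIELDS = ["Process", "Exact reference", "Link", "Type of procedure", "Parties",
              "Sector", "Start Date", "Summary", "Report of the evolution process", "Importance"]

    FIELD_RE = re.compile(r"^(%s):\s*(.*)$" % "|".join(re.escape(f) for f in FIELDS))

    def parse_records(lines):
        """Group 'Field: value' lines into one dict per record; a 'Process:' line starts a new record."""
        records, current, last_field = [], None, None
        for line in lines:
            match = FIELD_RE.match(line.strip())
            if match:
                field, value = match.groups()
                if field == "Process":
                    if current:
                        records.append(current)
                    current = {}
                if current is not None:
                    current[field] = value
                    last_field = field
            elif current is not None and last_field is not None:
                # Long fields such as Summary can wrap over several lines.
                current[last_field] += " " + line.strip()
        if current:
            records.append(current)
        return records

    if __name__ == "__main__":
        # Assumes the Word document was first saved as plain text, e.g. cases.txt.
        with open("cases.txt", encoding="utf-8") as f:
            records = parse_records(f)

        # A throwaway SQLite store for illustration; any web-facing database would do.
        conn = sqlite3.connect("cases.db")
        conn.execute("CREATE TABLE IF NOT EXISTS cases (process TEXT, reference TEXT, link TEXT, summary TEXT)")
        conn.executemany(
            "INSERT INTO cases VALUES (?, ?, ?, ?)",
            [(r.get("Process"), r.get("Exact reference"), r.get("Link"), r.get("Summary")) for r in records],
        )
        conn.commit()
        conn.close()
    From there, the parsed records could just as easily be loaded into whichever open-source web database is eventually chosen instead of SQLite.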

    Read the article

  • SQL SERVER – 2008 – Unused Index Script – Download

    - by pinaldave
    Download Missing Index Script with Unused Index Script
    Performance tuning is quite interesting, and indexes play a vital role in it. A proper index can improve performance and a bad index can hamper it. Here is the script from my script bank which I use to identify unused indexes on any database. Please note that you should not drop all the unused indexes this script suggests; it is just for guidance. You should not create more than 5-10 indexes per table. Additionally, this script sometimes does not give accurate information, so use your common sense. Anyway, the script is a good starting point. You should pay attention to User Scans, User Lookups and User Updates when you are going to drop an index. The general understanding is that if these values are all high and User Seeks is low, the index needs tuning. The index drop statement is also provided in the last column. Download Missing Index Script with Unused Index Script
    -- Unused Index Script
    -- Original Author: Pinal Dave (C) 2011
    SELECT TOP 25
        o.name AS ObjectName,
        i.name AS IndexName,
        i.index_id AS IndexID,
        dm_ius.user_seeks AS UserSeek,
        dm_ius.user_scans AS UserScans,
        dm_ius.user_lookups AS UserLookups,
        dm_ius.user_updates AS UserUpdates,
        p.TableRows,
        'DROP INDEX ' + QUOTENAME(i.name) + ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(OBJECT_NAME(dm_ius.OBJECT_ID)) AS 'drop statement'
    FROM sys.dm_db_index_usage_stats dm_ius
    INNER JOIN sys.indexes i ON i.index_id = dm_ius.index_id AND dm_ius.OBJECT_ID = i.OBJECT_ID
    INNER JOIN sys.objects o ON dm_ius.OBJECT_ID = o.OBJECT_ID
    INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
    INNER JOIN (SELECT SUM(p.rows) TableRows, p.index_id, p.OBJECT_ID
                FROM sys.partitions p GROUP BY p.index_id, p.OBJECT_ID) p
        ON p.index_id = dm_ius.index_id AND dm_ius.OBJECT_ID = p.OBJECT_ID
    WHERE OBJECTPROPERTY(dm_ius.OBJECT_ID,'IsUserTable') = 1
        AND dm_ius.database_id = DB_ID()
        AND i.type_desc = 'nonclustered'
        AND i.is_primary_key = 0
        AND i.is_unique_constraint = 0
    ORDER BY (dm_ius.user_seeks + dm_ius.user_scans + dm_ius.user_lookups) ASC
    GO
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Download, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Tweaking Firefox for Performance

    - by Simon Sheehan
    As an avid Firefox user since it began, I've been looking to make some under-the-hood changes to it in order to optimize it for speed and performance. I'd also like to limit the amount of RAM it uses. Are there any settings that can help with this? What can be changed in about:config that affects this? I'd also like to know whether themes or anything else really increase RAM usage, as they are generally very small files to download. Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:7.0a1) Gecko/20110630 Firefox/7.0a1

    Read the article

  • PDF Converter Elite Giveaway – Lets you create, convert and edit any type of PDF with ease

    - by Gopinath
    Are you looking for PDF editing software that lets you create, edit and convert any type of PDF with ease? Then here is a chance for you to win a lifetime free license of the PDF Converter Elite software. Tech Dreams, in partnership with pdfconverter.com, brings a giveaway contest exclusively for our readers. Continue reading to learn about the features of the application and the giveaway contest details. Adobe Acrobat is the best software for creating, editing and converting PDF files, but you need to spend a lot of money to buy it. PDF Converter Elite, which is priced at $100, has a rich set of features that satisfies most of your PDF management needs. Here is a quick rundown of the features of the application:
    Create PDF files from almost every popular Windows file format – You can create a PDF from almost 300 popular file formats supported by Windows. Want to convert a Word document to PDF? It's just a click away. How about converting Excel spreadsheets, PowerPoint presentations, text files, images, etc.? Yes, with a single click you will be able to turn them into PDF files.
    Convert PDF to Word, Excel, PowerPoint, Publisher, HTML – This is one of the features I liked best in this software. You can convert a PDF to any MS Office file format without losing the alignment and quality of the document. The converted documents look exactly the same as your PDF documents and you will be surprised to see near-100% layout replication in the converted document. I fell in love with the perfection with which the files are converted.
    Edit PDF files easily – You can rework your PDF documents by inserting watermarks, page numbers, headers, footers and more. You will also be able to merge two PDF files, overlay pages, remove unwanted pages, and split a single PDF into multiple files.
    Secure PDF files by setting a password – You can secure PDF files by limiting how others can use them – set a password to open the documents and restrict various activities like printing, copy & paste, screen reading, form filling, etc.
    If you are looking for an affordable PDF editing application, then PDF Converter Elite is there for you.
    10 x PDF Converter Elite Licenses Giveaway
    Here are the details on winning a free single-user license for our readers – we have 10 PDF Converter Elite single-user licenses worth $100 each. To win a license all you need to do is:
    Like the Tech Dreams fan page on Facebook
    Tweet or Like this post – buttons are available just below the post heading in the top section of this page
    Finally, drop a comment on how you would like to use PDF Converter Elite
    We will choose 10 winners through a lucky draw and the licenses will be sent to them in a personal email. The names of the winners will also be announced on Tech Dreams. So are you ready to grab a free copy of PDF Converter Elite worth $100?

    Read the article

  • MSCC: Purpose and benefits of Version Control Systems (VCS)

    Unfortunately, there was no monthly meetup during May. Which means that it was even more important and interesting to go forward with a great topic for this month. Earlier this year I already spoke to Nayar Joolfoo about doing a presentation on version control systems (VCS), and he gladly agreed since then. It was just about finding the right date for the action. Furthermore, it was also a great coincidence that Avinash Meetoo announced on social media networks that Knowledge 7 is about to have a new training on "Effective git" - which correlates to a book title Avinash is currently working on - all the best with your approach on this and reach out to our MSCC craftsmen for recessions. Once again a big Thank you to Orange Ebene Accelerator on providing the venue for us, and the MSCC members involved on securing the time slot for our event. Unfortunately, it's kind of tough to get an early confirmation for our meetups these days. I'll keep you posted on that one as there are some interesting and exciting options coming up soon. Okay, let's talk about the meeting and version control systems again. As usual, I'm going to put my first impression of the meetup: "Absolutely great topic, questions and discussions on version control systems, like git or VSO. I was also highly pleased by the number of first timers and female IT geeks. Hopefully, we will be able to keep this trend for future get-togethers." And I really have to emphasise the amount of fresh blood coming to our gathering. Also, during the initial phase it was surprising to see that exactly those first-timers, most of them students at various campuses here on the island, had absolutely no idea about version control systems. More about further down... Reactions of other attendees If I counted correctly, we had a total of 17 attendees this month, and I'd like to give you feedback from some of them: "Inspiring. Helped me understand more about GIT." -- Sean on event comments "Joined the meetup today with literally no idea what is a version control system. I have several reasons why I should be starting to use VCS as from NOW in my projects. Thanks Nayar, Jochen and other participants :)" -- Yudish on event comments "Was present today and I'm very satisfied.I was not aware if there was a such tool like git available. Thanks to those who contributed for this meetup.It was great. Learned a lot from this meetup!!" -- Leonardo on event comments "Seriously, I can see how it’s going to ease my task and help me save time. Gone are the issues with files backups.  And since I’ll be doing my dissertation this year, using Git would help me a lot for my backups and I’m grateful to Nayar for the great explanation." -- Swan-Iyah on MSCC meetup : Version Controls Hopefully, I'll be able to get some other sources - personal blogs preferred - on our meeting. Geeks, thank you so much for those encouraging comments. It's really great to experience that we, all members of the MSCC, are doing the right thing to get more IT information out, and to help each other to improve and evolve in our professional careers. Our agenda of the day Honestly, we had a bumpy start... First, I was battling a little bit with the movable room divider in order to maximize the space. I mean, we had 24 RSVPs and usually there might additional people coming along. Then, for what ever reason, we were facing power outages - actually twice in short periods. Not too good for the projector after all, but hey it went smooth for the rest of the time being. And last but not least... 
our first speaker Nayar got stuck somewhere on the road. ;-) Anyway, not a real show-stopper and we used the time until Nayar's arrival to introduce ourselves a little bit. It is always important for me to get to know the "newbies" a little bit, and as a result we had lots of students of university - first year, second year and recent graduates - among them. Surprisingly, none of them was ever in contact with version control systems at all. I mean, this is a shocking discovery! Similar to the ability of touch-typing I'd say that being able to use (and master) any kind of version control system is compulsory in any job in the IT industry. Seriously, I'm wondering what is being taught during the classes on the campus. All of them have to work on semester assessments or final projects, even in small teams of 2-4 people. That's the perfect occasion to get started with VCS. Already in this phase, we had great input from more experienced VCS users, like Sean, Avinash and myself. git - a modern approach to VCS - Nayar What a tour! Nayar gave us the full round of git from start to finish, even touching some more advanced techniques. First, he started to explain about the importance of version control systems as an essential tool for software developers, even working alone on a project, and the ability to have a kind of "time machine" that allows you to inspect and revert to a previous version of source code at any time. Then he showed how easy it is to install git on an Ubuntu based system but also mentioned that git is literally available for any operating system, like Windows, Mac OS X and of course other Linux distributions. Next, he showed us how to set the initial configuration values of user name and email address which simplifies the daily usage of the git client while working with your repositories. Then he initialised and added a new repository for some local development of a blogging software. All commands were done using the command line interface (CLI) so that they can be repeated on any system as reference. The syntax and the procedure is always the same, and Nayar clearly mentioned this to the attendees. Now, having a git repository in place it was about time to work on some "important" changes on the blogging software - just for the sake of demonstrating the ease of use and power of git. One interesting question came very early: "How many commands do we have to learn? It looks quite difficult at the moment" - Well, rest assured that during daily development circles you will need less than 10 git commands on a regular base: git add, commit, push, pull, checkout, and merge And Nayar demo'd all of them. Much to the delight of everyone he also showed gitk which is the git repository browser. It's an UI tool to display changes in a repository or a selected set of commits. This includes visualizing the commit graph, showing information related to each commit, and the files in the trees of each revision. Using gitk to display and browse information of a local git repository And last but not least, we took advantage of the internet connectivity and reached out to various online portals offering git hosting for free. Nayar showed us how to push the local repository into a remote system on github. Showing the web-based git browser and history handling, and then also explained and demo'd on how to connect to existing online repositories in order to get access to either your own source code or other people's open source projects. 
Next to github, we also spoke about bitbucket and gitlab as potential online platforms for your projects. Have a look at the conditions and details of their free service packages and what you can get additionally as a paying customer. Usually, you already get a lot of services for up to five users for free, but there might be other important aspects that have an impact on your decision. Anyways, moving git-based repositories between systems is a piece of cake, and changing online platforms is possible at any stage of your development. Visual Studio Online (VSO) - Jochen Well, Nayar literally covered all elements of working with git during his session, including the use of external online platforms. So, what would be the advantage of talking about Visual Studio Online (VSO)? First of all, VSO is "just another" online platform for hosting and managing git repositories on remote systems, equivalent to github, bitbucket, or any other web site. At the moment (of writing), Microsoft also provides a free package of up to five users / developers on a git repository, but there is more in that package. Of course, it is geared towards software development on Windows systems and the bonds are tightened towards the use of Visual Studio, but from experience you are absolutely not restricted to that. Connecting a Linux or Mac OS X machine with a git client or an integrated development environment (IDE) like Eclipse or Xcode works as smoothly as expected. So, why should one opt in for VSO? Well, one of the main aspects that I would like to mention here is that VSO integrates Microsoft's Application Lifecycle Management (ALM) methodology in their platform. That means you get agile project management with backlogs, sprints and burn-down charts, as well as the ability to track tasks, bug reports and work items, next to collaborative team chats. It's the whole package of agile development you'll get. And, something I mentioned briefly at the beginning of our meeting, VSO gives you the possibility of an automated continuous integration (CI) process which builds and can run tests of your source code after each commit of changes. Having a proper CI strategy is also part of the Clean Code Developer practices - on Level Green actually - and not only simplifies your life as a software developer but also reduces the sources of potential errors. Seamless integration and automated deployment between Microsoft Azure Web Sites and a git repository But my favourite feature is the seamless continuous deployment to Microsoft Azure. Especially while working on web projects, it's absolutely astonishing that as soon as you commit your changes it just takes a couple of seconds until your modifications are deployed and available on your Azure-hosted web sites. Upcoming Events and networking Due to the adjusted times, everybody was kind of hungry and we didn't follow up on networking or upcoming events - very unfortunate in my opinion, and this will have an impact on future planning of our meetups. I would rather see more conversations during and at the end of our meetings than everyone just packing their laptops, bags and accessories and rushing off to grab some food. I was hoping to get some information regarding this year's Code Challenge - supposedly to be organised during July? Maybe someone could leave a comment on that - but I couldn't get any updates. Well, I'll keep digging...
In case you would like to get more into git and how to use it effectively, please check out Knowledge 7's upcoming course on "Effective git". Thanks Avinash for your vital input into today's conversation; I'm looking forward to getting a grip on your book title very soon. My resume of the day: Do not work in IT without any kind of version control system! Seriously, without a VCS in place you're doing it wrong. It's like driving a car without seat belts attached or riding your bike without a safety helmet. You don't do that! End of discussion. ;-) Nowadays, with access to free (as in cost) tools to install on your machine and numerous online platforms that host your source code for free for up to five users, it's a no-brainer to get yourself familiar with a VCS. Today's sessions gave a good overview of how to start using git and how to connect to various remote services like github or VSO.

    Read the article

  • Jack of all trades, master of none [closed]

    - by Rope
    I've got a question similar to this one ("Is looking for code examples constantly a sign of a bad developer?"), though not entirely the same. I got out of college 2 years ago and I'm currently struggling with my university studies. Most likely I'll have to drop out and start working within the next couple of months. Now here's the pickle. I have no speciality whatsoever. When I got out of college I had worked with C, C++ and Java. I had had an internship at NEC-Philips, where I got familiar with C# (.NET) and taught myself how it worked. After college I started working with PHP, HTML, SQL, MySQL, Javascript and jQuery. I'm currently teaching myself Ruby on Rails and thus Ruby. At my university I also got familiar with MATLAB. As you can see I've got a broad scope of languages and frameworks I'm familiar with, but none I know inside out. So I guess this kinda applies to me: "Jack of all trades, master of none." I've been looking for jobs and I've noticed that most of them require some years of experience with a certain language and specific skills that apply to that language. My question is: How do I pick a speciality? And how do I know if I'll actually enjoy it? As I've worked with loads of languages, how would I be able to tell whether a given one is right for me? I don't like being tied down to a specific role and I quite like being a generalist. But in order to make more money I would need a specialisation. How would I pick something that goes against my nature? Thanks in advance, Rope.

    Read the article

  • Problem restarting my Ubuntu system

    - by VoY
    Whenever I try to restart my Ubuntu system, either from the command line by typing reboot or in GNOME, the computer goes from X to the console, starts the shutdown process, and then a message saying "[ some number ] Starting new kernel" appears on the screen and the computer goes back to the X login screen. I suspect this must have something to do with the nvidia drivers, because it seems to have appeared around the time I bought a new graphics card. Also, when I reboot the second time I see weird graphical artifacts on the screen. When I boot from the Ubuntu live CD I can reboot just fine. I used Jaunty; recently I switched to Karmic with no change. This bug is very annoying, because I have to hard-reset my computer in order to reboot, which I suspect is also not good for the filesystems. Can you suggest a way to debug the cause or, if not, at least the easiest way to go about reinstalling Ubuntu without losing customizations/settings/data?

    Read the article

  • error exporting data using mysql workbench

    - by Rajneesh Rana
    Hi, I have been getting a version-mismatch warning when trying to export a data dump using MySQL Workbench. So I copied mysqldump from the MySQL Server folder and placed it in the Workbench folder. Now when I try to export data I get the error: Operation failed with exitcode -1073741819. Here is a log entry:
    16:31:25 Dumping wordpress (wp_posts)
    Running: "mysqldump.exe" --defaults-extra-file="c:\docume~1\rajneesh.r\locals~1\temp\1\tmpxau7tz" --no-create-info=FALSE --order-by-primary=FALSE --force=FALSE --no-data=FALSE --tz-utc=TRUE --flush-privileges=FALSE --compress=FALSE --replace=FALSE --host=localhost --insert-ignore=FALSE --extended-insert=TRUE --user=root --quote-names=TRUE --hex-blob=FALSE --complete-insert=FALSE --add-locks=TRUE --port=3306 --disable-keys=TRUE --delayed-insert=FALSE --create-options=TRUE --delete-master-logs=FALSE --comments=TRUE --default-character-set=utf8 --max_allowed_packet=1G --flush-logs=FALSE --dump-date=TRUE --lock-tables=TRUE --allow-keywords=FALSE --events=FALSE "wordpress" "wp_posts"
    Operation failed with exitcode -1073741819
    Please help me with this issue. Thank you.

    Read the article

  • SQL SERVER – Capturing Wait Types and Wait Stats Information at Interval – Wait Type – Day 5 of 28

    - by pinaldave
    Earlier, I tried to cover some important points about wait stats in detail. Here are some points that we covered earlier: DMVs related to wait stats reset when we restart SQL Server services; DMVs related to wait stats reset when we manually reset the wait types. However, at times there is a need to make this data persistent so that we can take a look at it later on. Sometimes, performance tuning experts make some modifications to the server and try to measure the wait stats at that point in time and again after some duration. I use the following method to measure the wait stats over time.
    -- Create Table
    CREATE TABLE [MyWaitStatTable](
        [wait_type] [nvarchar](60) NOT NULL,
        [waiting_tasks_count] [bigint] NOT NULL,
        [wait_time_ms] [bigint] NOT NULL,
        [max_wait_time_ms] [bigint] NOT NULL,
        [signal_wait_time_ms] [bigint] NOT NULL,
        [CurrentDateTime] DATETIME NOT NULL,
        [Flag] INT
    )
    GO
    -- Populate Table at Time 1
    INSERT INTO MyWaitStatTable ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
    SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms], GETDATE(), 1
    FROM sys.dm_os_wait_stats
    GO
    ----- Desired Delay (for one hour)
    WAITFOR DELAY '01:00:00'
    -- Populate Table at Time 2
    INSERT INTO MyWaitStatTable ([wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms],[CurrentDateTime],[Flag])
    SELECT [wait_type],[waiting_tasks_count],[wait_time_ms],[max_wait_time_ms],[signal_wait_time_ms], GETDATE(), 2
    FROM sys.dm_os_wait_stats
    GO
    -- Check the difference between Time 1 and Time 2
    SELECT T1.wait_type,
        T1.wait_time_ms Original_WaitTime,
        T2.wait_time_ms LaterWaitTime,
        (T2.wait_time_ms - T1.wait_time_ms) DiffenceWaitTime
    FROM MyWaitStatTable T1
    INNER JOIN MyWaitStatTable T2 ON T1.wait_type = T2.wait_type
    WHERE T2.wait_time_ms > T1.wait_time_ms
        AND T1.Flag = 1 AND T2.Flag = 2
    ORDER BY DiffenceWaitTime DESC
    GO
    -- Clean up
    DROP TABLE MyWaitStatTable
    GO
    If you look at the script, you will notice that I have used an additional column called Flag. I use it to record when I captured the wait stats and then use it in my SELECT query to select the wait stats related to that time group. Many times I capture more than 5 or 6 different sets of wait stats, and I find this method very convenient for finding the differences between them. In a future blog post, we will talk about specific wait stats. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
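    If you prefer to drive the same capture from a client script rather than from T-SQL, here is a minimal Python sketch of the same idea: snapshot sys.dm_os_wait_stats twice and compute the difference. The pyodbc driver and the connection string are assumptions for illustration only; the T-SQL above is the method actually used in this post.
    import time
    import pyodbc  # assumed driver; the connection string below is only a placeholder

    CONN_STR = "DRIVER={SQL Server};SERVER=localhost;DATABASE=master;Trusted_Connection=yes"
    QUERY = "SELECT wait_type, wait_time_ms FROM sys.dm_os_wait_stats"

    def snapshot():
        """Return {wait_type: wait_time_ms} at this moment."""
        with pyodbc.connect(CONN_STR) as conn:
            return {wait_type: wait_time_ms for wait_type, wait_time_ms in conn.execute(QUERY)}

    before = snapshot()
    time.sleep(3600)   # the desired delay, one hour here to mirror WAITFOR DELAY '01:00:00'
    after = snapshot()

    # Difference between Time 1 and Time 2, largest deltas first
    diff = sorted(
        ((w, after[w] - before.get(w, 0)) for w in after if after[w] > before.get(w, 0)),
        key=lambda pair: pair[1],
        reverse=True,
    )
    for wait_type, delta_ms in diff:
        print(wait_type, delta_ms)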

    Read the article

  • ubuntu one not syncing

    - by Martin
    I am really starting to despair. I have been trying Ubuntu One for several months, on several machines, and it has caused me loads of different issues, wasting a lot of my time. It is not straightforward to use; it should be a piece of software that runs in the background, and users should not have to keep checking whether it is really doing its job. Of course I have been searching around this website and other forums but couldn't find an answer to my situation. Yesterday I had several problems with the client not syncing and using a lot of the machine's RAM and CPU. I had to reboot on several occasions and leave the office PC on overnight in order to sync a few files of not more than a few MB. Today I am experiencing another problem: I decided to do a test by putting a small file in my Ubuntu One shared folder. Ubuntu One is not detecting it (it has now been more than an hour), and therefore not uploading it to the server.
    martin@ubuntu-desktop:~$ u1sdtool --status
    State: QUEUE_MANAGER
    connection: With User With Network
    description: processing the commands pool
    is_connected: True
    is_error: False
    is_online: True
    queues: IDLE
    and
    martin@ubuntu-desktop:~$ u1sdtool --current-transfers
    Current uploads: 0
    Current downloads: 0
    I am running Ubuntu 11.04 64-bit with all recent updates. On my other machine the transfer of files seems to be completely frozen, with around 10 files in the queue but no transfer whatsoever. Another curious issue is on my Ubuntu 10.10 laptop, where Ubuntu One seems to have completely disappeared from the Nautilus context menu and the folder/file sync status icons are missing. I have therefore been forced to upgrade to 11.04 on that machine. Anyway, now I would like to solve the "processing the commands pool" issue and make sure the client…

    Read the article

  • Quickly revert an Oracle Database to a known state

    - by Anthony
    I would like to use Selenium to test a web application, but in order to do that successfully the tests must be run against a database at a known state. The recording and running of the Selenium tests is not within the scope of this website, so I'm only looking for recommendations on how best to revert the database after each test execution. Some details:
    current database size is 30GB, however only about 4GB needs to be reverted
    database is Oracle 11g Standard Edition running on Windows Server 2003
    the data in 6 different schemas needs to be reverted
    Ideally the process should be scripted so that it can be re-executed frequently and automatically via a scheduled task.

    Read the article

  • iproute2 rules and iptables NAT... what is the difference?

    - by Jakobud
    We have 2 different ISP connections. Our previous "IT guy" set up our firewall like so: when /etc/rc.local was executed on startup, it ran a bunch of ip rule add and ip route add commands in order to route certain internal hosts over certain ISP connections. Then at the end of /etc/rc.local, it executed our iptables firewall rules, which were generated by Firewall Builder. These iptables rules have both policy and NAT rules set up in them. What I don't understand is: why did he use iproute2 to specify rules and routes but also specify NAT rules in iptables? Why didn't he just do it all in one or the other instead of using them both? Could he have gotten rid of the iproute2 rules and routes and just put all those same rules into the iptables NAT settings?

    Read the article

  • NHibernate Conventions

    - by Ricardo Peres
    Introduction It seems that nowadays everyone loves conventions! Not the ones that you go to, but the ones that you use, that is! It just happens that NHibernate also supports conventions, and we’ll see exactly how. Conventions in NHibernate are supported in two ways: Naming of tables and columns when not explicitly indicated in the mappings; Full domain mapping. Naming of Tables and Columns Since always NHibernate has supported the concept of a naming strategy. A naming strategy in NHibernate converts class and property names to table and column names and vice-versa, when a name is not explicitly supplied. In concrete, it must be a realization of the NHibernate.Cfg.INamingStrategy interface, of which NHibernate includes two implementations: DefaultNamingStrategy: the default implementation, where each column and table are mapped to identically named properties and classes, for example, “MyEntity” will translate to “MyEntity”; ImprovedNamingStrategy: underscores (_) are used to separate Pascal-cased fragments, for example, entity “MyEntity” will be mapped to a “my_entity” table. The naming strategy can be defined at configuration level (the Configuration instance) by calling the SetNamingStrategy method: 1: cfg.SetNamingStrategy(ImprovedNamingStrategy.Instance); Both the DefaultNamingStrategy and the ImprovedNamingStrategy classes offer singleton instances in the form of Instance static fields. DefaultNamingStrategy is the one NHibernate uses, if you don’t specify one. Domain Mapping In mapping by code, we have the choice of relying on conventions to do the mapping automatically. This means a class will inspect our classes and decide how they will relate to the database objects. The class that handles conventions is NHibernate.Mapping.ByCode.ConventionModelMapper, a specialization of the base by code mapper, NHibernate.Mapping.ByCode.ModelMapper. The ModelMapper relies on an internal SimpleModelInspector to help it decide what and how to map, but the mapper lets you override its decisions.  You apply code conventions like this: 1: //pick the types that you want to map 2: IEnumerable<Type> types = Assembly.GetExecutingAssembly().GetExportedTypes(); 3:  4: //conventions based mapper 5: ConventionModelMapper mapper = new ConventionModelMapper(); 6:  7: HbmMapping mapping = mapper.CompileMappingFor(types); 8:  9: //the one and only configuration instance 10: Configuration cfg = ...; 11: cfg.AddMapping(mapping); This is a very simple example, it lacks, at least, the id generation strategy, which you can add by adding an event handler like this: 1: mapper.BeforeMapClass += (IModelInspector modelInspector, Type type, IClassAttributesMapper classCustomizer) => 2: { 3: classCustomizer.Id(x => 4: { 5: //set the hilo generator 6: x.Generator(Generators.HighLow); 7: }); 8: }; The mapper will fire events like this whenever it needs to get information about what to do. And basically this is all it takes to automatically map your domain! It will correctly configure many-to-one and one-to-many relations, choosing bags or sets depending on your collections, will get the table and column names from the naming strategy we saw earlier and will apply the usual defaults to all properties, such as laziness and fetch mode. However, there is at least one thing missing: many-to-many relations. The conventional mapper doesn’t know how to find and configure them, which is a pity, but, alas, not difficult to overcome. 
To start, for my projects, I have this rule: each entity exposes a public property of type ISet<T> where T is, of course, the type of the other endpoint entity. Extensible as it is, NHibernate lets me implement this very easily: 1: mapper.IsOneToMany((MemberInfo member, Boolean isLikely) => 2: { 3: Type sourceType = member.DeclaringType; 4: Type destinationType = member.GetMemberFromDeclaringType().GetPropertyOrFieldType(); 5:  6: //check if the property is of a generic collection type 7: if ((destinationType.IsGenericCollection() == true) && (destinationType.GetGenericArguments().Length == 1)) 8: { 9: Type destinationEntityType = destinationType.GetGenericArguments().Single(); 10:  11: //check if the type of the generic collection property is an entity 12: if (mapper.ModelInspector.IsEntity(destinationEntityType) == true) 13: { 14: //check if there is an equivalent property on the target type that is also a generic collection and points to this entity 15: PropertyInfo collectionInDestinationType = destinationEntityType.GetProperties().Where(x => (x.PropertyType.IsGenericCollection() == true) && (x.PropertyType.GetGenericArguments().Length == 1) && (x.PropertyType.GetGenericArguments().Single() == sourceType)).SingleOrDefault(); 16:  17: if (collectionInDestinationType != null) 18: { 19: return (false); 20: } 21: } 22: } 23:  24: return (true); 25: }); 26:  27: mapper.IsManyToMany((MemberInfo member, Boolean isLikely) => 28: { 29: //a relation is many to many if it isn't one to many 30: Boolean isOneToMany = mapper.ModelInspector.IsOneToMany(member); 31: return (!isOneToMany); 32: }); 33:  34: mapper.BeforeMapManyToMany += (IModelInspector modelInspector, PropertyPath member, IManyToManyMapper collectionRelationManyToManyCustomizer) => 35: { 36: Type destinationEntityType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First(); 37: //set the mapping table column names from each source entity name plus the _Id sufix 38: collectionRelationManyToManyCustomizer.Column(destinationEntityType.Name + "_Id"); 39: }; 40:  41: mapper.BeforeMapSet += (IModelInspector modelInspector, PropertyPath member, ISetPropertiesMapper propertyCustomizer) => 42: { 43: if (modelInspector.IsManyToMany(member.LocalMember) == true) 44: { 45: propertyCustomizer.Key(x => x.Column(member.LocalMember.DeclaringType.Name + "_Id")); 46:  47: Type sourceType = member.LocalMember.DeclaringType; 48: Type destinationType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First(); 49: IEnumerable<String> names = new Type[] { sourceType, destinationType }.Select(x => x.Name).OrderBy(x => x); 50:  51: //set inverse on the relation of the alphabetically first entity name 52: propertyCustomizer.Inverse(sourceType.Name == names.First()); 53: //set mapping table name from the entity names in alphabetical order 54: propertyCustomizer.Table(String.Join("_", names)); 55: } 56: }; We have to understand how the conventions mapper thinks: For each collection of entities found, it will ask the mapper if it is a one-to-many; in our case, if the collection is a generic one that has an entity as its generic parameter, and the generic parameter type has a similar collection, then it is not a one-to-many; Next, the mapper will ask if the collection that it now knows is not a one-to-many is a many-to-many; Before a set is mapped, if it corresponds to a many-to-many, we set its mapping table. 
Now, this is tricky: because we have no way to maintain state, we sort the names of the two endpoint entities and we combine them with a “_”; for the first alphabetical entity, we set its relation to inverse – remember, on a many-to-many relation, only one endpoint must be marked as inverse; finally, we set the column name as the name of the entity with an “_Id” suffix; Before the many-to-many relation is processed, we set the column name as the name of the other endpoint entity with the “_Id” suffix, as we did for the set. And that’s it. With these rules, NHibernate will now happily find and configure many-to-many relations, as well as all the others. You can wrap this in a new conventions mapper class, so that it is more easily reusable: 1: public class ManyToManyConventionModelMapper : ConventionModelMapper 2: { 3: public ManyToManyConventionModelMapper() 4: { 5: base.IsOneToMany((MemberInfo member, Boolean isLikely) => 6: { 7: return (this.IsOneToMany(member, isLikely)); 8: }); 9:  10: base.IsManyToMany((MemberInfo member, Boolean isLikely) => 11: { 12: return (this.IsManyToMany(member, isLikely)); 13: }); 14:  15: base.BeforeMapManyToMany += this.BeforeMapManyToMany; 16: base.BeforeMapSet += this.BeforeMapSet; 17: } 18:  19: protected virtual Boolean IsManyToMany(MemberInfo member, Boolean isLikely) 20: { 21: //a relation is many to many if it isn't one to many 22: Boolean isOneToMany = this.ModelInspector.IsOneToMany(member); 23: return (!isOneToMany); 24: } 25:  26: protected virtual Boolean IsOneToMany(MemberInfo member, Boolean isLikely) 27: { 28: Type sourceType = member.DeclaringType; 29: Type destinationType = member.GetMemberFromDeclaringType().GetPropertyOrFieldType(); 30:  31: //check if the property is of a generic collection type 32: if ((destinationType.IsGenericCollection() == true) && (destinationType.GetGenericArguments().Length == 1)) 33: { 34: Type destinationEntityType = destinationType.GetGenericArguments().Single(); 35:  36: //check if the type of the generic collection property is an entity 37: if (this.ModelInspector.IsEntity(destinationEntityType) == true) 38: { 39: //check if there is an equivalent property on the target type that is also a generic collection and points to this entity 40: PropertyInfo collectionInDestinationType = destinationEntityType.GetProperties().Where(x => (x.PropertyType.IsGenericCollection() == true) && (x.PropertyType.GetGenericArguments().Length == 1) && (x.PropertyType.GetGenericArguments().Single() == sourceType)).SingleOrDefault(); 41:  42: if (collectionInDestinationType != null) 43: { 44: return (false); 45: } 46: } 47: } 48:  49: return (true); 50: } 51:  52: protected virtual new void BeforeMapManyToMany(IModelInspector modelInspector, PropertyPath member, IManyToManyMapper collectionRelationManyToManyCustomizer) 53: { 54: Type destinationEntityType = member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First(); 55: //set the mapping table column names from each source entity name plus the _Id sufix 56: collectionRelationManyToManyCustomizer.Column(destinationEntityType.Name + "_Id"); 57: } 58:  59: protected virtual new void BeforeMapSet(IModelInspector modelInspector, PropertyPath member, ISetPropertiesMapper propertyCustomizer) 60: { 61: if (modelInspector.IsManyToMany(member.LocalMember) == true) 62: { 63: propertyCustomizer.Key(x => x.Column(member.LocalMember.DeclaringType.Name + "_Id")); 64:  65: Type sourceType = member.LocalMember.DeclaringType; 66: Type destinationType = 
member.LocalMember.GetPropertyOrFieldType().GetGenericArguments().First(); 67: IEnumerable<String> names = new Type[] { sourceType, destinationType }.Select(x => x.Name).OrderBy(x => x); 68:  69: //set inverse on the relation of the alphabetically first entity name 70: propertyCustomizer.Inverse(sourceType.Name == names.First()); 71: //set mapping table name from the entity names in alphabetical order 72: propertyCustomizer.Table(String.Join("_", names)); 73: } 74: } 75: } Conclusion Of course, there is much more to mapping than this, I suggest you look at all the events and functions offered by the ModelMapper to see where you can hook for making it behave the way you want. If you need any help, just let me know!

    Read the article

  • Logging on as root without winbind timeouts

    - by Josh Kelley
    How can I set up my Linux box so that, if the Active Directory domain controller is down, I can still log in as root, without any timeouts or delays? Following the example of most of the documentation out there, I've listed pam_winbind.so before pam_unix.so in my /etc/pam.d configurations. I believe that this is the cause of the problem. I remember seeing alternate /etc/pam.d setups that change the order and maybe add either pam_localuser or pam_succeed_if (to see if the uid is less than 500), but I can't find any specifics now (and I'm not enough of an expert in PAM to quickly and easily come up with a robust configuration on my own). What is the recommended setup for PAM with Winbind to avoid timeouts and delays if Active Directory is unavailable?

    Read the article

  • Vagrant: VirtualBox: Headless Ubuntu: How to set up bridged networking?

    - by Jay Godse
    I am trying to set up a Vagrant VirtualBox (v4.2.4) virtual machine with an Ubuntu "box" which I got from www.vagrantbox.es. I was able to use Vagrant to set it up as a headless box and start it, and then I was able to SSH locally into it (using 127.0.0.1:2222), connect to the internet and run a bunch of "sudo apt-get" commands to update it and install new software. I would like to be able to access this virtual machine on my network, so I need a bridged network adapter for the virtual box. When I went to the VirtualBox console for this device and tried to set up bridged networking, it said that I needed the "guest additions". I tried to install them but couldn't get the .iso file for the guest additions. I went elsewhere on the 'net, and it seems that I had to run "sudo apt-get install virtualbox-guest-additions-iso" from the virtual machine in order to get bridged networking. I tried this, and it installed fine after a couple of reboots. I then tried to set up bridged networking again (VirtualBox console: Devices - Network Adaptors...) but it didn't work. What, or what else, do I need to do to set up bridged networking in this virtual machine? I appreciate any help that I can get.

    Read the article

  • Investigation: Can different combinations of components effect Dataflow performance?

    - by jamiet
    Introduction The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising, its an incredibly complicated beast and we’re abstracted away from that complexity via some boxes that go yellow red or green and that have some lines drawn between them. Example dataflow In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input, all produce the same output, but which all operate slightly differently by way of having different transformation components. I also want to use this blog post to challenge a common held opinion that I see perpetuated over and over again on the SSIS forum. That is, that people assume adding components to a dataflow will be detrimental to overall performance. Its not surprising that people think this –it is intuitive to think that more components means more work- however this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and the number of components is actually one of the less important ones; having said that I have never proven that assertion and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I’ll happily call that one out as a myth even without any investigation!  The Setup I have a 2GB datafile which is a list of 4731904 (~4.7million) customer records with various attributes against them and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] [BirthDate] The data file is a SSIS raw format file which I chose to use because it is the quickest way of getting data into a dataflow and given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible. In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories and in order to do it I shall be using different combinations of SSIS’ Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad core (8 logical Procs) Intel Core i7 at 1.73GHz and Samsung SSD hard drive. Its running SQL Server 2008 R2 on Windows 7. The Variables Here are the three combinations of components that I am going to test:     One Conditional Split - A single Conditional Split component CSPL Split by Month of Birth and income category that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. This next screenshot displays the expression logic in use: Derived Column & Conditional Split - A Derived Column component DER Income Category that adds a new column [IncomeCategory] which will contain one of two possible text values {“LessThan50000”,”GreaterThan50000”} and uses [YearlyIncome] to determine which value each row should get. 
A Conditional Split component CSPL Split by Month of Birth and Income Category then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. The next screenshots display the expression logic in use: DER Income Category         CSPL Split by Month of Birth and Income Category       Three Conditional Splits - A Conditional Split component that produces two outputs based on [YearlyIncome], one for each Income Category. Each of those outputs will go to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case then I am separating the single Conditional Split of #1 into three Conditional Split components. The next screenshots display the expression logic in use: CSPL Split by Income Category         CSPL Split by Month of Birth 1& 2       Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. For illustration here is a screenshot of the dataflow containing three Conditional Split components: As you can these dataflows have a fair bit of work to do and remember that they’re doing that work for 4.7million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes: The dataflow containing just one Conditional Split (i.e. #1) will be quicker There is no significant difference between any of them One of the two dataflows containing multiple transformation components will be quicker Regardless of which of those outcomes come to pass we will have learnt something and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS. The Results and Analysis The table below shows all of the executions, 10 for each dataflow. It also shows the average for each along with a standard deviation. All durations are in seconds. I’m pasting a screenshot because I frankly can’t be bothered with the faffing about needed to make a presentable HTML table. It is plain to see from the average that the dataflow containing three conditional splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it and I’ll explain that below. Before progressing, a side note. The standard deviation for the “Three Conditional Splits” dataflow is orders of magnitude smaller – indicating that performance for this dataflow can be predicted with much greater confidence too. The Explanation I refer you to the screenshot above that shows how CSPL Split by Month of Birth and salary category in the first dataflow is setup. Observe that there is a case for each combination of Month Of Date and Income Category – 24 in total. These expressions get evaluated in the order that they appear and hence if we assume that Month of Date and Income Category are uniformly distributed in the dataset we can deduce that the expected number of expression evaluations for each row is 12.5 i.e. 1 (the minimum) + 24 (the maximum) divided by 2 = 12.5. Now take a look at the screenshots for the second dataflow. 
We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before, only the expression differs slightly. In this case then we have 1 + 12.5 = 13.5 expected evaluations for each row, which would account for the slightly longer average execution time for this dataflow.

Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 and CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations is 6.5. There are two of them, so the total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5.

14.5 is still more than 12.5 and 13.5 though, so why is the third dataflow so much quicker? Simple: the conditional expressions in the first two dataflows have two boolean predicates to evaluate, one for income category and one for month of birth; the expressions in the Conditional Split in the third dataflow only have one predicate, so they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between:

MONTH(BirthDate) == 1 && YearlyIncome <= 50000

and

MONTH(BirthDate) == 1

In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7 million rows and you get a significant amount of extra CPU cycles; that's where our duration difference comes from.

The Wrap-up

The obvious point here is that adding new components to a dataflow isn't necessarily going to make it go any slower; moreover, you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn't necessarily mean using fewer components; indeed, sometimes you may be able to reduce workload in ways that aren't immediately obvious, as I think I have proven here.

Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results; let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember, execute using dtexec, not within BIDS.

If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out:

- Inequality joins, Asynchronous transformations and Lookups
- Destination Adapter Comparison
- Don't turn the dataflow into a cursor
- SSIS Dataflow – Designing for performance (webinar)

Any comments? Let me know!

@Jamiet
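For anyone who wants to sanity-check the expected-evaluation arithmetic above, here is a throwaway sketch that reproduces the figures quoted in the post. Python is used purely as a calculator; nothing here touches SSIS, and the only inputs are the case counts and the row total already stated above.

# Recap of the expected-evaluation arithmetic from the post above.
ROWS = 4_731_904  # customer row count quoted in the post

def expected_case_evaluations(case_count: int) -> float:
    """Average number of cases a Conditional Split tests per row when the
    matching case is uniformly distributed: (1 + case_count) / 2."""
    return (1 + case_count) / 2

# Dataflow 1: one Conditional Split with 24 compound cases.
df1 = expected_case_evaluations(24)                      # 12.5

# Dataflow 2: one Derived Column evaluation plus the same 24-case split.
df2 = 1 + expected_case_evaluations(24)                  # 13.5

# Dataflow 3: a 2-case income split feeding two 12-case month splits
# (summing both month splits, as the post does, gives 14.5).
df3 = expected_case_evaluations(2) + 2 * expected_case_evaluations(12)

# The decisive difference: in dataflows 1 and 2 every case tested also
# evaluates the YearlyIncome predicate; in dataflow 3 it is evaluated once.
extra_income_checks_per_row = expected_case_evaluations(24) - 1   # 11.5

print(f"Expected case evaluations per row: {df1}, {df2}, {df3}")
print(f"Extra YearlyIncome comparisons overall: {extra_income_checks_per_row * ROWS:,.0f}")

Running it prints 12.5, 13.5 and 14.5 for the three dataflows, plus roughly 54 million extra YearlyIncome comparisons across the 4.7 million rows, which is the work the third dataflow avoids.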

    Read the article

  • Sharepoint Document Library not receiving emails

    - by ria
I have created a SharePoint document library which is email enabled. However, when I send email to the designated email address from anywhere, I don't receive the email and attachment in the library. I have done some R&D and I have found that in order to receive email from anywhere I have to expose the DNS of the SharePoint site to the outside world. Now I don't know whether that applies to the email address designated to me in my Active Directory profile as well (my company domain email address). How can I test that email reception is working in the document library? I have tried sending an email from the SharePoint site itself and it works fine, so the SMTP settings are correctly done.
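One low-tech way to exercise the incoming-mail path, independent of the library settings, is to send a message straight to the library's drop address from a machine that can reach the SharePoint SMTP service. The snippet below is only a sketch: the host name and library address are made-up placeholders and need replacing with your own values.

import smtplib
from email.message import EmailMessage

# Placeholder values - both are assumptions, replace with your own.
SMTP_HOST = "sharepoint.example.com"              # hypothetical SharePoint front-end
LIBRARY_ADDRESS = "doclib@sharepoint.example.com" # hypothetical drop address of the library

msg = EmailMessage()
msg["From"] = "tester@example.com"
msg["To"] = LIBRARY_ADDRESS
msg["Subject"] = "Document library email test"
msg.set_content("Test message body")
msg.add_attachment(b"hello", maintype="application",
                   subtype="octet-stream", filename="test.txt")

# If this send succeeds but the item never shows up in the library, the
# problem is more likely the incoming email configuration on the server
# (SMTP service, drop folder, timer job) than the library itself.
with smtplib.SMTP(SMTP_HOST, 25) as smtp:
    smtp.send_message(msg)

Watching both the library and the SMTP drop folder on the server after sending should show where delivery stops.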

    Read the article

BUILD 2013 Session – What's New In XAML

    - by Tim Murphy
Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/build-2013-sessionndashwhatrsquos-new-in-xaml.aspx

If ever there was a session where you felt like your head was going to explode, this one would do it. Tim Heuer proceeded to try to fit as many of the changes and additions to XAML as he could into one hour. There were a number of improvements that struck me. The first was the fact that we no longer need to put stack panels in the AppBar in order to add buttons. This has been changed to a CommandBar, which at the very least makes the markup read more cleanly. Now if they would just bring this same improvement to Windows Phone we would be set. There was a lot of cheering at the beginning of his talk when he showed that there are now date time pickers. I understand that it makes life easier, but I just couldn't get that excited. The couple of features that did grab my attention were being able to select a group of tags and then add an encapsulating tag such as a StackPanel around them, and the fact that they have optimized XAML so that it now runs on average 25% faster. I'd go crazy trying to list off all the improvements and new features, so be sure to go and review the recording of the session. del.icio.us Tags: BUILD 2013,XAML,Windows 8.1

    Read the article

  • Consuming Hello World pagelet in WebCenter Spaces

    - by astemkov
Introduction

The goal of this exercise is to show you how you can use the Hello World pagelet that you just created from your web space.

Assumptions

Let's assume the following:
- Pagelet Producer is running on http://pageletserver.company.com:8889/pagelets/
- WebCenter is running on http://webcenter.company.com:8888/webcenter/
- You created the Hello_World pagelet as described here.

For our exercise we will need a space, so let's log in to WebCenter Portal and create a space called "myspace" using the "Portal Site" template:

Registering Pagelet Producer with WebCenter Portal

In order to use our newly created pagelet from WebCenter Spaces, we first need to register the Pagelet Producer:
1. Click the "Administration" link on the WebCenter toolbar
2. Open the "Configuration" tab
3. Click the "Services" link in the upper-left corner of the page
4. Click the "Portlet Producers" link on the right-hand pane of the screen
5. Click the "Register" button
6. Select the "Pagelet Producer" radio button and type Producer Name = "MyPageletProducer", Server URL = http://pageletserver.company.com:8889/pagelets/
7. Click the "Test" button

If everything is successful you will see the following screen:

Now click "OK". The Pagelet Producer is registered:

Inserting the Hello World pagelet into a WebCenter Space

Now let's insert the Hello World pagelet into the "myspace" page:
1. Go back to "myspace", click the icon in the upper-right corner of the page and select "Edit Page"
2. Click one of the "Add Content" buttons:
3. Select "Mash-Ups":
4. Select "Pagelet Producers": you will see the MyPageletProducer that we just registered. Click on it.
5. You will see the library "MyLib" that contains our "Hello_World" pagelet. Click on "MyLib" and you will see the "Hello_World" pagelet.
6. Click the "Add" button, and then "Close".
7. Click "Save", and then "Close".

Now we see that our "Hello World" pagelet is inserted into the "myspace" page:

    Read the article

  • ArchBeat Link-o-Rama for December 11, 2012

    - by Bob Rhubart
Good To Know - Conflicting View Objects and Shared Entity | Andrejus Baranovskis
Oracle ACE Director Andrejus Baranovskis shares his thoughts, and a sample application, dealing with an "interesting ADF behavior" encountered over the weekend.

Patching Oracle Exalogic - Updating Linux on the Compute Nodes - Part 1 | Jos Nijhoff
Jos Nijhoff launches a series of posts that deal with "patching the operating system on the modified Sun Fire X4170 M2 servers...dubbed compute nodes in Exalogic terminology."

Expanding on requestaudit - Tracing who is doing what...and for how long | Kyle Hatlestad
"One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing," says Oracle Fusion Middleware A-Team architect Kyle Hatlestad. Get up close and technical in his post.

Oracle Data Integrator Presentation from NYOUG Webinar | Gurcan Orhan
Oracle ACE Director and award-winning data warehouse architect Gurcan Orhan shares his presentation from the recent NYOUG LI SIG.

SOA 11g Technology Adapters – ECID Propagation | Greg Mally
"Many SOA Suite 11g deployments include the use of the technology adapters for various activities including integration with FTP, database, and files to name a few," says Oracle Fusion Middleware A-Team member Greg Mally. "Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective." Greg's post focuses on technical tips for dealing with one of these challenges.

Missing Duties for RUP3 upgrade in Fusion Applications
Richard from the Oracle Fusion Middleware A-Team explains how to safely apply policy store changes in thirteen easy steps.

Thought for the Day
"Well over half of the time you spend working on a project (on the order of 70 percent) is spent thinking, and no tool, no matter how advanced, can think for you." — Frederick P. Brooks
Source: SoftwareQuotes.com

    Read the article

< Previous Page | 430 431 432 433 434 435 436 437 438 439 440 441  | Next Page >