Search Results

Search found 22653 results on 907 pages for 'robert may'.

Page 126/907 | < Previous Page | 122 123 124 125 126 127 128 129 130 131 132 133  | Next Page >

  • Getting error 0x000003eb when installing DDK sample printer drivers

    - by Andy
    I've got a development machine which has been severely abused when it comes to installing and removing printer drivers. I'm now at the stage where I want to install some sample printer drivers from the DDK (WDK), but unfortunately I get the message 'Unable to install printer. Operation could not be completed (error 0x000003eb).' So I tried installing the same printer driver built from the DDK in a clean Win 7 x64 VM, and it works, so the only thing I can imagine is that the driver store or driver folder may be slightly corrupt from the many previous printer drivers I had installed. So my question is, is there any way I can clean my system of old printer drivers / files? Or is there any repair functionality in Windows that may replace the common Windows printer drivers?

    Read the article

  • Why I don't use SSIS checkpoint files

    - by jamiet
    In a recent discussion in regard to general ETL best practices, the subject of checkpoint files as a means for package restartability came up and I stated that I was dead against using them. For anyone that may care, here is why: configuring them is distinctly unintuitive (that's a matter of opinion, but if you follow the link I'll wager that you will agree); they don't make any allowance for loop iterations; they cannot store variables of type Object; they are limited in ability, since there are many scenarios where you may want to execute certain containers regardless of whether the package is started from a checkpoint file but the current usage model does not allow for this; they are ignored by event handlers, which wouldn't be so bad if there were a way to toggle this behaviour in certain scenarios; and they don't work properly. I'll expand on the last bullet point. I have encountered situations where the behaviour for tasks executing concurrently is unpredictable. That is, sometimes the completion of a task that executes concurrently with a failed/failing task will make it into the checkpoint file and sometimes it won't. This is near-impossible to reproduce, but it does happen, as my good friend John Welch will hopefully concur (if he is reading). Is anyone out there making successful use of checkpoint files within SSIS? I would be interested in knowing about that if so. @Jamiet

    Read the article

  • How to send a popup message to an unknown computer connected to my WLAN?

    - by Leandro
    Is there any way to send a popup message from a Linux system to a "random" laptop/tablet/mobile linked to my wireless network? For example, if I leave my WLAN open and I see an unrecognized computer connected to it, is there any way to send that device a message? On the other hand, if I am connected to someone else's open network and they may or may not be aware that their network is open, can I send them a message warning that I am accessing their network? Probably for a completely "random" device the answer should be no. But if we restrict it to laptops with a Win7 or Linux OS, is there any service running by default on such systems that allows one to send such popup messages? PS: I have no practical motivation for this question. This is only a curiosity.

    Read the article

  • Partitions on Linux and their CHS dependence

    - by FractalizeR
    Hello. Recently I ran into a problem partitioning a WD20EARS disk (with 4k sectors). I needed the partitions to be aligned correctly, so I just used parted in "unit s" mode, started all partitions at a mod-8 sector (the drive itself reports that its sector size is 512b) and ended all of them one sector short of a mod-8 boundary. But then I thought that maybe I should also take the cylinder boundaries into account (I've seen some posts on the net where fdisk complains about partitions not starting/ending on a cylinder boundary). And then... I thought that if the drive lies about its sector size, maybe it's lying about the whole geometry? Should I care about aligning partitions to cylinder boundaries? If so, how do I find these boundaries? I guess each drive model can have a different number of sectors per track/cylinder... Or is sector alignment all I should take care of?
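
    A minimal sketch of the alignment arithmetic described above, assuming the usual 512e layout (512-byte logical sectors reported by the drive over 4 KiB physical sectors); the example start sectors are made up for illustration. Nothing in this check depends on the reported CHS geometry, which is one reason most modern partitioning tools simply start partitions at sector 2048 (1 MiB) and ignore cylinder boundaries altogether.

        LOGICAL = 512                            # bytes per logical sector, as the drive reports
        PHYSICAL = 4096                          # bytes per physical sector on a 4k drive
        SECTORS_PER_PHYS = PHYSICAL // LOGICAL   # 8 logical sectors per physical sector

        def aligned(start_sector):
            # True if the partition starts on a physical-sector boundary.
            return start_sector % SECTORS_PER_PHYS == 0

        for start in (63, 64, 2048):             # 63 is the old CHS-style default start
            print(start, "aligned" if aligned(start) else "misaligned")
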

    Read the article

  • SQL SERVER – ORDER BY ColumnName vs ORDER BY ColumnNumber

    - by pinaldave
    I strongly favor ORDER BY ColumnName. I read one blog post where the blogger compared the performance of the two SELECT statements and came to the conclusion that there is no harm in using ColumnNumber. Let us first address the point that there is no performance difference. Run the following two scripts together: USE AdventureWorks GO -- ColumnName (Recommended) SELECT * FROM HumanResources.Department ORDER BY GroupName, Name GO -- ColumnNumber (Strongly Not Recommended) SELECT * FROM HumanResources.Department ORDER BY 3,2 GO If you look at the result and see the execution plan, you will see that both queries take the same amount of time. However, performance was not the point of this blog post. It is not good enough to stop here. We need to understand the advantages and disadvantages of both methods. Case 1: When Not Using * and Columns are Re-ordered USE AdventureWorks GO -- ColumnName (Recommended) SELECT GroupName, Name, ModifiedDate, DepartmentID FROM HumanResources.Department ORDER BY GroupName, Name GO -- ColumnNumber (Strongly Not Recommended) SELECT GroupName, Name, ModifiedDate, DepartmentID FROM HumanResources.Department ORDER BY 3,2 GO Case 2: When someone changes the schema of the table, affecting column order. I will let you recreate the example for this yourself. If the schema on your development server is different from the production server and you use ColumnNumber, you will get different results on the production server. Summary: When you develop the query it may not be an issue, but as time passes and new columns are added to the SELECT statement, or the original table is re-ordered, if you have used ColumnNumber it is possible that your query will start giving you unexpected results and an incorrect ORDER BY. One should note that the choice between ORDER BY ColumnName and ORDER BY ColumnNumber should not be made based on performance but on usability and scalability. It is always recommended to use a proper ORDER BY clause with ColumnName to avoid any confusion. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Announcement: Oracle Database Appliance 2.4 patch update now available

    - by uwes
    The Oracle Database Appliance 2.4 patch is now available from My Oracle Support (MOS). If you search for the Oracle Database Appliance 2.4.0.0.0 Kit under Patches, it will display the newly uploaded bundles. The patch highlights include: a normal redundancy (double-mirroring) option providing 6TB of usable storage, and enhanced diagnostics (Trace File Analyzer and ODACHK). Also, if you review the README, you may see content that says: "The grid infrastructure and database patching, both are rolling upgradable. During our patching, we patch the node 1 first and when completed, we patch the node 2." I would like to clarify that the 'infrastructure' updates (OS, firmware, ILOM, etc.) will require a short downtime of the ODA while they are applied. When you update the grid infrastructure (--gi), the appliance manager verifies that the infrastructure was updated, so you cannot just patch the GI without first updating the infrastructure. The high-level update patch steps include (but are not limited to): download the patch update to your ODA; the --infra (infrastructure) portion is updated, the ODA databases are down, and the ODA is/may be rebooted; the ODA and GI/databases are restarted; then issue the command to update the grid infrastructure/databases (the order of the steps is completed automatically and you cannot control when the nodes are brought up and down during the patching): Node 1 -- shut down databases and GI; Node 1 -- patch GI/database; Node 1 -- bring up databases and GI; Node 2 -- shut down databases and GI; Node 2 -- patch GI/database; Node 2 -- bring up databases and GI. A replay of Friday's session with Sohan on the 2.4 release can be found here. The PDF of the presentation is here. The Data Sheet, WP, and 2.4 Configurator are available on the ODA OTN site.

    Read the article

  • Is my use case diagram correct?

    - by Dummy Derp
    NOTE: I am self-studying UML, so I have nobody to verify my diagrams and hence I am posting here; please bear with me. This is a problem I got from a PDF found via Google that simply had the following problem statement: Problem Statement: A library contains books and journals. The task is to develop a computer system for borrowing books. In order to borrow a book the borrower must be a member of the library. There is a limit on the number of books that can be borrowed by each member of the library. The library may have several copies of a given book. It is possible to reserve a book. Some books are for short term loans only. Other books may be borrowed for 3 weeks. Users can extend the loans. 1. Draw a use case diagram for a library. 2. Give a use case description for two use cases: • Borrow copy of book • Extend loan Diagram: Use case description: 1. Borrow a copy of the book: If a person wishes to borrow a book from Derpville Public Library, he/she must be a member of the library, in which case they will be allowed to borrow a certain number of books. If the person is not a member, the book will not be issued to them to take away; rather, they will have to sit and read it in the library. 2. Extending a loan: Some books are lent for 3 weeks while others may be lent for more than 3 weeks, in which case the borrower has to come to the library and get the due date extended. There is a limit on how far the user can extend the due date of a particular book.

    Read the article

  • Ubuntu 13.10 Installing MariaDB when Apt reports MariaDB has unmet dependencies or broken packages

    - by Ecaz
    I have tried everything to install MariaDB on this clean Ubuntu installation, but I keep getting this error: Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: mariadb-server : Depends: mariadb-server-5.5 (= 5.5.33a+maria-1~saucy) but it is not going to be installed E: Unable to correct problems, you have held broken packages. I have followed this guide to try to install it: http://www.unixmen.com/install-lemp-server-nginx-mysql-mariadb-php-ubuntu-13-10-server/ And I have also followed the "official" guide on the MariaDB downloads page for 13.10: https://downloads.mariadb.org/mariadb/repositories/ But nothing seems to be working. Edit 1: I have tried both How do I resolve unmet dependencies? and How to install MariaDB? but it still gives me the error I posted above. It's a fresh Ubuntu install with hardly anything installed. Edit 2: All the check boxes are ticked in Updates. I ran: sudo apt-get update && sudo apt-get -f install mariadb-server-5.5"=5.5.33a+maria-1~saucy" And it gave me this error: The following packages have unmet dependencies: mariadb-server-5.5 : Depends: mariadb-client-5.5 (>= 5.5.33a+maria-1~saucy) but it is not going to be installed Depends: mariadb-server-core-5.5 (>= 5.5.33a+maria-1~saucy) but it is not going to be installed E: Unable to correct problems, you have held broken packages.

    Read the article

  • Which algorithm is used in Advance Wars-type turn-based games?

    - by Jan de Lange
    Has anyone tried to develop, or does anyone know of, an algorithm such as is used in a typical turn-based game like Advance Wars, where the number of objects and the number of moves per object may be too large to search through to a reasonable depth, as one would do in a game with a smaller search space like chess? Some path-finding is needed to engage in combat, harvest, or move to an object, so that such actions become possible on the next move. With this you can build a search tree for each item, resulting in a large tree for all items. With a cost function one can determine the best moves. Then the board flips over to the player role (min/max) and the computer searches for the best player move, flips back, and so on, up to a number of cycles deep. Finally it has found the best move and now it's the player's turn. But he may be asleep by now... So how is this done in practice? I have found several good sources on A*, DFS, BFS, evaluation/cost functions, etc. But as of yet I do not see how I can put it all together.
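
    A toy Python sketch of the depth-limited min/max search plus cost function described above. The coin game (take 1-3 coins, whoever takes the last coin wins) is only a stand-in for a real board state; in an Advance Wars-style game, legal_moves() would return the already path-finding-pruned set of unit actions rather than three integers, and evaluate() would score units, terrain and income.

        import math

        def legal_moves(coins):
            return [n for n in (1, 2, 3) if n <= coins]

        def evaluate(coins, maximizing):
            # Exact value at terminal states; 0 as a neutral heuristic at the depth cut-off.
            if coins == 0:
                return -1 if maximizing else 1   # the player to move has already lost
            return 0

        def minimax(coins, depth, maximizing):
            if coins == 0 or depth == 0:
                return evaluate(coins, maximizing), None
            best_val, best_move = (-math.inf, None) if maximizing else (math.inf, None)
            for move in legal_moves(coins):
                val, _ = minimax(coins - move, depth - 1, not maximizing)
                if (maximizing and val > best_val) or (not maximizing and val < best_val):
                    best_val, best_move = val, move
            return best_val, best_move

        print(minimax(10, depth=6, maximizing=True))   # -> (1, 2): take 2 to win

    Alpha-beta pruning and move ordering bolt onto this same skeleton; the hard, game-specific work is keeping legal_moves() small enough that a few plies of depth remain affordable.
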

    Read the article

  • Implementing logical paging with RadDataPager for WPF and Silverlight

    Following the great series about RadDataPager started by Rossen and Pavel, today I'm going to show you how to implement logical paging. We are going to implement alphabetical paging similar to this ASP.NET AJAX Grid Demo. As you may already know, the key to the heart of the RadDataPager is the IPagedCollectionView interface. You can create your own implementations of this interface and implement any custom paging logic you want. This is exactly what we are going to do in this article. Introducing PagedCollectionViewBase and LogicallyPagedCollectionView<T>: if you have looked at the IPagedCollectionView interface, you may have found that it is not a trivial interface to implement. It has 5 methods, 6 properties and 2 events, for a total of 13 members to implement. To ease any further implementation of the paging interface we are going to create a base class that will have most of the ...

    Read the article

  • umount bind of stale NFS

    - by Paul Eisner
    I've got a problem removing mounts created with mount -o bind from a locally mounted NFS folder. Assume the following mount structure: NFS mounted directory: $ mount -o rw,soft,tcp,intr,timeo=10,retrans=2,retry=1 \ 10.20.0.1:/srv/source /srv/nfs-source Bound directory: $ mount -o bind /srv/nfs-source/sub1 /srv/bind-target/sub1 Which results in this mount map: $ mount /dev/sda1 on / type ext3 (rw,errors=remount-ro) # ... 10.20.0.1:/srv/source on /srv/nfs-source type nfs (rw,soft,tcp,intr,timeo=10,retrans=2,retry=1,addr=10.20.0.100) /srv/nfs-source/sub1 on /srv/bind-target/sub1 type none (rw,bind) If the server (10.20.0.1) goes down (e.g. ifdown eth0), the handles become stale, which is expected. I can now un-mount the NFS mount with force: $ umount -f /srv/nfs-source This takes some seconds, but works without any problems. However, I cannot un-mount the bound directory /srv/bind-target/sub1. The forced umount results in: $ umount -f /srv/bind-target/sub1 umount2: Stale NFS file handle umount: /srv/bind-target/sub1: Stale NFS file handle umount2: Stale NFS file handle Here is a trace: http://pastebin.com/ipvvrVmB I've tried umounting the sub-directories beforehand, and tried to find any processes accessing anything within the NFS or bind mounts (there are none). lsof also complains: $ lsof -n lsof: WARNING: can't stat() nfs file system /srv/nfs-source Output information may be incomplete. lsof: WARNING: can't stat() nfs file system /srv/bind-target/sub1 (deleted) Output information may be incomplete. lsof: WARNING: can't stat() nfs file system /srv/bind-target/ Output information may be incomplete. I've tried with recent stable Linux kernels 3.2.17, 3.2.19 and 3.3.8 (I cannot use 3.4.x, because I need the grsecurity patch, which is not yet supported; grsecurity is not patched in in the tests above!). My nfs-utils are version 1.2.2 (Debian stable). Does anybody have an idea how I can either: force the un-mount some other way (any dirty trick is welcome; data loss or damage is negligible at this point), or use something else instead of mount -o bind? (I cannot use soft links, because the mounted directories will be used in a chroot; bindfs via FUSE is far too slow to be an option.) Thanks, Paul Update 1: With 2.6.32.59 the umount of the (stale) sub-mounts works just fine. It seems to be a kernel regression bug. The above tests were with NFSv3. Additional tests with NFSv4 showed no change. Update 2: We have now tested multiple 2.6 and 3.x kernels and are sure that this was introduced in 3.0.x. We will file a bug report; hopefully they figure it out.

    Read the article

  • Data structure for pattern matching.

    - by alvonellos
    Let's say you have an input file with many entries like these: date, ticker, open, high, low, close, <and some other values> and you want to execute a pattern matching routine on the entries (rows) in that file, using a candlestick pattern, for example (see Doji). That pattern can appear on any uniform time interval (let t = 1s, 5s, 10s, 1d, 7d, 2w, 2y, and so on...). Say a pattern matching routine can take an arbitrary number of rows to perform an analysis and contain an arbitrary number of subpatterns; in other words, some patterns may require 4 entries to operate on. Say also that the routine may later have to find and classify extrema (local and global maxima and minima as well as inflection points) for the ticker over a closed interval; for example, you could say that a cubic function (x^3) has extrema on the interval [-1, 1] (see link). What would be the most natural choice of data structure? What about an interface that adapts a Ticker object containing one row of data to a collection of Tickers so that an arbitrary pattern can be applied to the data? What's the first thing that comes to mind? I chose a doubly-linked circular linked list that has the following methods: push_front(), push_back(), pop_front(), pop_back(), and an overloaded [] that can be used with negative parameters. But that data structure seems very clumsy: since so much pushing and popping is going on, I have to make a deep copy of the data structure before running an analysis on it. So, I don't know if I made my question very clear -- but the main points are: What kind of data structures should be considered when analyzing sequential data points to conform to a pattern that does NOT require random access? What kind of data structures should be considered when classifying extrema of a set of data points?
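
    One common alternative to the push/pop linked list is a fixed-size sliding window over the row sequence, with each pattern expressed as a function of however many rows it needs. A minimal Python sketch is below; the field names, the Doji tolerance and the sample rows are illustrative assumptions, not taken from the original post.

        from collections import deque, namedtuple

        Ticker = namedtuple("Ticker", "date symbol open high low close")

        def is_doji(rows, tolerance=0.001):
            # Single-candle pattern: open and close are (nearly) equal.
            last = rows[-1]
            return abs(last.open - last.close) <= tolerance * last.open

        def scan(rows, pattern, window):
            # Slide a fixed-size window over the rows, yielding the dates of matches.
            buf = deque(maxlen=window)        # old rows fall off automatically
            for row in rows:
                buf.append(row)
                if len(buf) == window and pattern(buf):
                    yield row.date

        sample = [
            Ticker("2012-06-01", "XYZ", 10.00, 10.40, 9.80, 10.20),
            Ticker("2012-06-02", "XYZ", 10.20, 10.30, 10.00, 10.20),
        ]
        print(list(scan(sample, is_doji, window=1)))   # -> ['2012-06-02']

    Because each pattern only reads the window, no deep copy is needed before an analysis, and extrema classification can be run as a separate pass over the same sequence.
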

    Read the article

  • Row Number Transformation

    The Row Number Transformation calculates a row number for each row, and adds this as a new output column to the data flow. The column number is a sequential number, based on a seed value. Each row receives the next number in the sequence, based on the defined increment value. The final row number can be stored in a variable for later analysis, and can be used as part of a process to validate the integrity of the data movement. The Row Number transform has a variety of uses, such as generating surrogate keys, or as the basis for a data partitioning scheme when combined with the Conditional Split transformation. Properties Property Data Type Description Seed Int32 The first row number or seed value. Increment Int32 The value added to the previous row number to make the next row number. OutputVariable String The name of the variable into which the final row number is written post execution. (Optional). The three properties have been configured to support expressions, or they can set directly in the normal manner. Expressions on components are only visible on the hosting Data Flow task, not at the individual component level. Sometimes the data type of the property is incorrectly set when the properties are created, see the Troubleshooting section below for details on how to fix this. Installation The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft’s recommendations. You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. For 2005/2008 Only - Finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Row Number transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service pack to your SQL Server servers and workstations, and this component requires a minimum of SQL Server 2005 Service Pack 1. Downloads The Row Number Transformation  is available for SQL Server 2005, SQL Server 2008 (includes R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed. Row Number Transformation for SQL Server 2005 Row Number Transformation for SQL Server 2008 Row Number Transformation for SQL Server 2012 Version History SQL Server 2012 Version 3.0.0.6 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2012) SQL Server 2008 Version 2.0.0.5 - SQL Server 2008 release. (15 Oct 2008) SQL Server 2005 Version 1.2.0.34 – Updated installer. (25 Jun 2008) Version 1.2.0.7 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. Added the ability to reuse an existing column to hold the generated row number, as an alternative to the default of adding a new column to the output. (18 Jun 2006) Version 1.2.0.7 - SQL Server 2005 RTM Refresh. SP1 Compatibility Testing. 
Added the ability to reuse an existing column to hold the generated row number, as an alternative to the default of adding a new column to the output. (18 Jun 2006) Version 1.0.0.0 - Public Release for SQL Server 2005 IDW 15 June CTP (29 Aug 2005) Screenshot Code Sample The following code sample demonstrates using the Data Generator Source and Row Number Transformation programmatically in a very simple package. Package package = new Package(); package.Name = "Data Generator & Row Number"; // Add the Data Flow Task Executable taskExecutable = package.Executables.Add("STOCK:PipelineTask"); // Get the task host wrapper, and the Data Flow task TaskHost taskHost = taskExecutable as TaskHost; MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject; // Add Data Generator Source IDTSComponentMetaData100 componentSource = dataFlowTask.ComponentMetaDataCollection.New(); componentSource.Name = "Data Generator"; componentSource.ComponentClassID = "Konesans.Dts.Pipeline.DataGenerator.DataGenerator, Konesans.Dts.Pipeline.DataGenerator, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b"; CManagedComponentWrapper instanceSource = componentSource.Instantiate(); instanceSource.ProvideComponentProperties(); instanceSource.SetComponentProperty("RowCount", 10000); // Add Row Number Tx IDTSComponentMetaData100 componentRowNumber = dataFlowTask.ComponentMetaDataCollection.New(); componentRowNumber.Name = "FlatFileDestination"; componentRowNumber.ComponentClassID = "Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b"; CManagedComponentWrapper instanceRowNumber = componentRowNumber.Instantiate(); instanceRowNumber.ProvideComponentProperties(); instanceRowNumber.SetComponentProperty("Increment", 10); // Connect the two components together IDTSPath100 path = dataFlowTask.PathCollection.New(); path.AttachPathAndPropagateNotifications(componentSource.OutputCollection[0], componentRowNumber.InputCollection[0]); #if DEBUG // Save package to disk, DEBUG only new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null); #endif package.Execute(); foreach (DtsError error in package.Errors) { Console.WriteLine("ErrorCode : {0}", error.ErrorCode); Console.WriteLine(" SubComponent : {0}", error.SubComponent); Console.WriteLine(" Description : {0}", error.Description); } package.Dispose(); Troubleshooting Make sure you have downloaded the version that matches your version of SQL Server. We offer separate downloads for SQL Server 2005, SQL Server 2008 and SQL Server 2012. If you get an error when you try and use the component along the lines of The component could not be added to the Data Flow task. Please verify that this component is properly installed.  ... The data flow object "Konesans ..." is not installed correctly on this computer, this usually indicates that the internal cache of SSIS components needs to be updated. This is held by the SSIS service, so you need restart the the SQL Server Integration Services service. You can do this from the Services applet in Control Panel or Administrative Tools in Windows. You can also restart the computer if you prefer. You may also need to restart any current instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. 
Once installation is complete you need to manually add the task to the toolbox before you will see it and to be able add it to packages - How do I install a task or transform component? Please also make sure you have installed a minimum of SP1 for SQL 2005. The IDtsPipelineEnvironmentService was added in SQL Server 2005 Service Pack 1 (SP1) (See  http://support.microsoft.com/kb/916940). If you get an error Could not load type 'Microsoft.SqlServer.Dts.Design.IDtsPipelineEnvironmentService' from assembly 'Microsoft.SqlServer.Dts.Design, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91'. when trying to open the user interface, it implies that your development machine has not had SP1 applied. Very occasionally we get a problem to do with the properties not being created with the correct data type. Since there is no way to programmatically to define the data type of a pipeline component property, it can only infer it. Whilst we set an integer value as we create the property, sometimes SSIS decides to define it is a decimal. This is often highlighted when you use a property expression against the property and get an error similar to Cannot convert System.Int32 to System.Decimal. Unfortunately this is beyond our control and there appears to be no pattern as to when this happens. If you do have more information we would be happy to hear it. To fix this issue you can manually edit the package file. In Visual Studio right click the package file from the Solution Explorer and select View Code, which will open the package as raw XML. You can now search for the properties by name or the component name. You can then change the incorrect property data types highlighted below from Decimal to Int32. <component id="37" name="Row Number Transformation" componentClassID="{BF01D463-7089-41EE-8F05-0A6DC17CE633}" … >     <properties>         <property id="38" name="UserComponentTypeName" …>         <property id="41" name="Seed" dataType="System.Int32" ...>10</property>         <property id="42" name="Increment" dataType="System.Decimal" ...>10</property>         ... If you are still having issues then contact us, but please provide as much detail as possible about error, as well as which version of the the task you are using and details of the SSIS tools installed.

    Read the article

  • BUILDROOT files during RPM generation

    - by khmarbaise
    Currently i have the following spec file to create a RPM. The spec file is generated by maven plugin to produce a RPM out of it. The question is: will i find files which are mentioned in the spec file after the rpm generation inside the BUILDROOT/SPECS/SOURCES/SRPMS structure? %define _unpackaged_files_terminate_build 0 Name: rpm-1 Version: 1.0 Release: 1 Summary: rpm-1 License: 2009 my org Distribution: My App Vendor: my org URL: www.my.org Group: Application/Collectors Packager: my org Provides: project Requires: /bin/sh Requires: jre >= 1.5 Requires: BASE_PACKAGE PreReq: dependency Obsoletes: project autoprov: yes autoreq: yes BuildRoot: /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/buildroot %description %install if [ -e $RPM_BUILD_ROOT ]; then mv /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/tmp-buildroot/* $RPM_BUILD_ROOT else mv /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/tmp-buildroot $RPM_BUILD_ROOT fi ln -s /usr/myusr/app $RPM_BUILD_ROOT/usr/myusr/app2 ln -s /tmp/myapp/somefile $RPM_BUILD_ROOT/tmp/myapp/somefile2 ln -s name.sh $RPM_BUILD_ROOT/usr/myusr/app/bin/oldname.sh %files %defattr(-,myuser,mygroup,-) %dir "/usr/myusr/app" "/usr/myusr/app2" "/tmp/myapp/somefile" "/tmp/myapp/somefile2" "/usr/myusr/app/lib" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/start.sh" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/filter-version.txt" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/name.sh" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/name-Linux.sh" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/filter.txt" %attr(755,myuser,mygroup) "/usr/myusr/app/bin/oldname.sh" %dir "/usr/myusr/app/conf" %config "/usr/myusr/app/conf/log4j.xml" "/usr/myusr/app/conf/log4j.xml.deliver" %prep echo "hello from prepare" %pre -p /bin/sh #!/bin/sh if [ -s "/etc/init.d/myapp" ] then /etc/init.d/myapp stop rm /etc/init.d/myapp fi %post #!/bin/sh #create soft link script to services directory ln -s /usr/myusr/app/bin/start.sh /etc/init.d/myapp chmod 555 /etc/init.d/myapp %preun #!/bin/sh #the argument being passed in indicates how many versions will exist #during an upgrade, this value will be 1, in which case we do not want to stop #the service since the new version will be running once this script is called #during an uninstall, the value will be 0, in which case we do want to stop #the service and remove the /etc/init.d script. if [ "$1" = "0" ] then if [ -s "/etc/init.d/myapp" ] then /etc/init.d/myapp stop rm /etc/init.d/myapp fi fi; %triggerin -- dependency, dependency1 echo "hello from install" %changelog * Tue May 23 2000 Vincent Danen <[email protected]> 0.27.2-2mdk -update BuildPreReq to include rep-gtk and rep-gtkgnome * Thu May 11 2000 Vincent Danen <[email protected]> 0.27.2-1mdk -0.27.2 * Thu May 11 2000 Vincent Danen <[email protected]> 0.27.1-2mdk -added BuildPreReq -change name from Sawmill to Sawfish The problem i found is that the files (filter.txt in particular) after the generation process on a Ubuntu system but not on SuSE system. Which might be caused by different rpm versions ? Currently we have an integration test which fails based on the non existing of the file (filter.txt under a buildroot folder?)

    Read the article

  • Why does TDD work?

    - by CesarGon
    Test-driven development (TDD) is big these days. I often see it recommended as a solution for a wide range of problems here on Programmers SE and in other venues. I wonder why it works. From an engineering point of view, it puzzles me for two reasons: The "write test + refactor till pass" approach looks incredibly anti-engineering. If civil engineers used that approach for bridge construction, or car designers for their cars, for example, they would be reshaping their bridges or cars at very high cost, and the result would be a patched-up mess with no well-thought-out architecture. The "refactor till pass" guideline is often taken as a mandate to forget architectural design and do whatever is necessary to comply with the test; in other words, the test, rather than the user, sets the requirement. In this situation, how can we guarantee good "ilities" in the outcome, i.e. a final result that is not only correct but also extensible, robust, easy to use, reliable, safe, secure, etc.? This is what architecture usually does. Testing cannot guarantee that a system works; it can only show that it doesn't. In other words, testing may show you that a system contains defects if it fails a test, but a system that passes all tests is not safer than a system that fails them. Test coverage, test quality and other factors are crucial here. The false sense of safety that an "all green" outcome gives many people has been reported in the civil and aerospace industries as extremely dangerous, because it may be interpreted as "the system is fine" when it really means "the system is as good as our testing strategy". Often, the testing strategy is not checked. Or, who tests the tests? I would like to see answers containing reasons why TDD in software engineering is a good practice, and why the issues that I have explained above are not relevant (or not relevant enough) in the case of software. Thank you.

    Read the article

  • Persistent "held broken packages" error

    - by stoplan
    sudo apt-get update && sudo apt-get install netflix-desktop gives the error The following packages have unmet dependencies: netflix-desktop : Depends: wine-browser-installer but it is not going to be installed E: Unable to correct problems, you have held broken packages. but dpkg --get-selections | grep hold shows nothing. I'm running 12.04 64-bit. I've followed the directions in How do I resolve unmet dependencies?: Confirmed that main, universe, restricted and multiverse software sources are enabled sudo apt-get clean sudo apt-get -f install (returning '0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.') sudo dpkg --configure -a sudo apt-get -f install (again '0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.) sudo apt-get -u dist-upgrade (0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.) used Y PPA Manager to check for duplicate ppas (none found) [Edit] I have had the same error with other packages. Here's the output requested by Alaa: sudo apt-get install wine-browser-installer Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: wine-browser-installer : Depends: wine-compholio (= 1.5.19~precise1) E: Unable to correct problems, you have held broken packages.

    Read the article

  • Can't install Apache 2.2.22 on Ubuntu 13.10

    - by B18C1
    My work environment requires Apache 2.2.22 instead of the latest version of 2.4. My machine is currently running Ubuntu 13.10. When I use Synaptic or apt-get it will not allow me to choose an older version of Apache than 2.4. So my question is, how can I force an install of Apache 2.2.22 on Ubuntu 13.10 using Synaptic or apt-get. When I do try to specify the version I get the following: sudo apt-get install apache2=2.2.22-1ubuntu1 [sudo] password for b18c1: Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: apache2 : Depends: apache2-mpm-worker (= 2.2.22-1ubuntu1) but it is not going to installed or apache2-mpm-prefork (= 2.2.22-1ubuntu1) but it is not going to be installed or apache2-mpm-event (= 2.2.22-1ubuntu1) but it is not going to be installed or apache2-mpm-itk (= 2.2.22-1ubuntu1) but it is not going to be installed Depends: apache2.2-common (= 2.2.22-1ubuntu1) but it is not going to be installed E: Unable to correct problems, you have held broken packages.

    Read the article

  • Docking Station Disabling/Enabling Network Connection

    - by bryan_cook
    Whenever I dock my laptop onto my docking station, Windows disables my Wireless Network Connection and enables my Wired Network Connection. Whenever I undock my laptop, Windows disables my Wired Network Connection and enables my Wireless Network Connection. Is there a way to disable this feature? After Windows performs the automatic disabling/enabling, I run into errors when I try to disable the now-enabled connection, specifically "It is not possible to disable the connection at this time. This connection may be using one or more protocols that do not support Plug-and-Play, or it may have been initiated by another user or the system account." I'm assuming the latter portion of the error is true ... my network connections are being enabled/disabled under a system account. Regardless, I'd just like to disable the feature altogether. For reference, I'm working with Windows XP Professional on a Dell Latitude D630. Thanks in advance!

    Read the article

  • Why Oracle Data Integrator for Big Data?

    - by Mala Narasimharajan
    Big Data is everywhere these days - but what exactly is it? It’s data that comes from a multitude of sources – not only structured data, but unstructured data as well.  The sheer volume of data is mindboggling – here are a few examples of big data: climate information collected from sensors, social media information, digital pictures, log files, online video files, medical records or online transaction records.  These are just a few examples of what constitutes big data.   Embedded in big data is tremendous value and being able to manipulate, load, transform and analyze big data is key to enhancing productivity and competitiveness.  The value of big data lies in its propensity for greater in-depth analysis and data segmentation -- in turn giving companies detailed information on product performance, customer preferences and inventory.  Furthermore, by being able to store and create more data in digital form, “big data can unlock significant value by making information transparent and usable at much higher frequency." (McKinsey Global Institute, May 2011) Oracle's flagship product for bulk data movement and transformation, Oracle Data Integrator, is a critical component of Oracle’s Big Data strategy. ODI provides automation, bulk loading, and validation and transformation capabilities for Big Data while minimizing the complexities of using Hadoop.  Specifically, the advantages of ODI in a Big Data scenario are due to pre-built Knowledge Modules that drive processing in Hadoop. This leverages the graphical UI to load and unload data from Hadoop, perform data validations and create mapping expressions for transformations.  The Knowledge Modules provide a key jump-start and eliminate a significant amount of Hadoop development.  Using Oracle Data Integrator together with Oracle Big Data Connectors, you can simplify the complexities of mapping, accessing, and loading big data (via NoSQL or HDFS) but also correlating your enterprise data – this correlation may require integrating across heterogeneous and standards-based environments, connecting to Oracle Exadata, or sourcing via a big data platform such as Oracle Big Data Appliance. To learn more about Oracle Data Integration and Big Data, download our resource kit to see the latest in whitepapers, webinars, downloads, and more… or go to our website on www.oracle.com/bigdata

    Read the article

  • Hiding the Flash Message After a Time Delay

    - by Madhan ayyasamy
    Hi friends, the flash hash is a great way to provide feedback to your users. Here is a quick tip for hiding the flash message after a period of time if you don't want to leave it lingering around. First, add this line to the head of your layout to ensure the Prototype and script.aculo.us JavaScript libraries are loaded: Next, add the following to either your layout (recommended), your view templates or a partial, depending on your needs. I usually add this to a partial and include the partial in my layouts. "flash", :id = flash_type % "text/javascript" do % setTimeout("new Effect.Fade('');", 10000); This will wrap the flash message in a div with class 'flash' and id 'error', 'notice' or 'warn', depending on the flash key specified. The value '10000' is the time in milliseconds before the flash will disappear; in this case, 10 seconds. This function looks pretty good, and little JavaScript stunts like this can help make your site feel more professional. It's also worth bearing in mind, though, that not everybody can see well or read as quickly as others, so this may not be suitable for every application. Update: As Mitchell has pointed out (see comments below), it may be better to set the flash_type as the div class rather than its id. If there is the possibility that you'll be showing more than one flash message per page, setting the flash_type as the div id will result in your HTML/XHTML code becoming invalid, because the unique identifier will be used more than once per page. Here is a slightly more complex version of the method shown above that will hide all divs with class 'flash' after a time delay, achieving the same effect and also ensuring your code stays valid with more than one flash message! "flash #{flash_type}" % "text/javascript" do % setTimeout("$$('div.flash').each(function(flash){ flash.hide();})", 10000); In this example, the div id is not set at all. Instead, each flash div will have the class 'flash' and also the class of the type of flash message ('error', 'warning', etc.).

    Read the article

  • Functional Methods on Collections

    - by GlenPeterson
    I'm learning Scala and am a little bewildered by all the methods (higher-order functions) available on the collections. Which ones produce more results than the original collection, which ones produce less, and which are most appropriate for a given problem? Though I'm studying Scala, I think this would pertain to most modern functional languages (Clojure, Haskell) and also to Java 8 which introduces these methods on Java collections. Specifically, right now I'm wondering about map with filter vs. fold/reduce. I was delighted that using foldRight() can yield the same result as a map(...).filter(...) with only one traversal of the underlying collection. But a friend pointed out that foldRight() may force sequential processing while map() is friendlier to being processed by multiple processors in parallel. Maybe this is why mapReduce() is so popular? More generally, I'm still sometimes surprised when I chain several of these methods together to get back a List(List()) or to pass a List(List()) and get back just a List(). For instance, when would I use: collection.map(a => a.map(b => ...)) vs. collection.map(a => ...).map(b => ...) The for/yield command does nothing to help this confusion. Am I asking about the difference between a "fold" and "unfold" operation? Am I trying to jam too many questions into one? I think there may be an underlying concept that, if I understood it, might answer all these questions, or at least tie the answers together.
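
    The question is about Scala, but the map/filter-versus-fold trade-off shows up anywhere these primitives exist; here is a minimal Python sketch of the same idea (squaring the even numbers both ways) purely for illustration. The names and data are made up.

        from functools import reduce

        xs = [1, 2, 3, 4, 5, 6]

        # map + filter style: each step reads naturally and produces a new collection
        via_map_filter = [x * x for x in xs if x % 2 == 0]

        # fold style: one traversal, the accumulator carries the growing result
        via_fold = reduce(lambda acc, x: acc + [x * x] if x % 2 == 0 else acc, xs, [])

        print(via_map_filter, via_fold == via_map_filter)   # [4, 16, 36] True

    The fold version touches the sequence once, but the accumulator imposes an order on the combination, which is exactly why the friend's point about map/filter being friendlier to parallel execution tends to hold.
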

    Read the article

  • Low-level GPU code and Shader Compilation

    - by ktodisco
    Bear with me, because I will raise several questions at once. I still feel, though, that overall this can be treated as one question that may be answered succinctly. I recently dove into solidifying my understanding of the assembly language, low-level memory operations, CPU structure, and program optimizations. This also sparked my interest in how higher-level shading languages, GLSL and HLSL in particular, are compiled and optimized, as well as what formats they are reduced to before machine code is generated (assuming they are not converted directly into machine code). After a bit of research into this, the best resource I've found is this presentation from ATI about the compilation of and optimizations for HLSL. I also found sample ARB assembly code. This sort of addressed my original curiosity, but it raised several other questions. The assembler code in the ATI presentation seems like it contains instructions specifically targeted for the GPU, but is this merely a hypothetical example created for the purpose of conceptual understanding, or is this code really generated during shader compilation? If so, is it possible to inspect it, or even write it in place of the higher-level syntax? My initial searches for an answer to the last question tell me that this may be disallowed, but I have not dug too deep yet. Also, along the same lines, are GLSL shader programs compiled into ARB assembly code before machine code is generated, and is it possible to write direct ARB assembly? Lastly, and perhaps what I am most interested in finding out: are there comprehensive resources on shader compilation and low-level GPU code? I have been unable to find any thus far. I ask simply because I am curious :)

    Read the article

  • Encrypted home breaks on login

    - by berkes
    My home is encrypted, which breaks the login. Gnome and other services try to find all sorts of dotfiles, write to them, read from them and so on, e.g. .ICEauthority. They are not found (yet) because at that moment the home is still encrypted. I do not have automatic login set, since that has known issues with an encrypted home in Ubuntu. When I go through the following steps, there is no problem: boot up the system; [Ctrl][Alt][F1], log in; run ecryptfs-mount-private; [Ctrl][Alt][F7], done. I can now log in. I may have some setting wrong, but have no idea where. I suspect ecryptfs-mount-private should be run earlier in the boot process, but I do not know how to make it so. Some issues that may cause trouble: I have a fingerprint reader, and it works for login and PAM. I have three keyrings in seahorse, containing passwords from old machines (backups), not just one. The suggestion was that the PAM settings are wrong, so here are the relevant parts from /etc/pam.d/common-auth. # here are the per-package modules (the "Primary" block) auth [success=3 default=ignore] pam_fprintd.so auth [success=2 default=ignore] pam_unix.so nullok_secure try_first_pass auth [success=1 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass # here's the fallback if no module succeeds auth requisite pam_deny.so # prime the stack with a positive return value if there isn't one already; # this avoids us returning an error just because nothing sets a success code # since the modules above will each just jump around auth required pam_permit.so # and here are more per-package modules (the "Additional" block) auth optional pam_ecryptfs.so unwrap # end of pam-auth-update config I am not sure how this configuration works, but it seems that maybe the *optional* in auth optional pam_ecryptfs.so unwrap is causing the ecryptfs mount to be ignored?

    Read the article

  • What type of career path / jobs should a developer pursue to have the best work-life balance?

    - by programmx10
    I know some people may look down on a question like this, but lately I've been thinking a lot about what direction I can take my career in to have a good work-life balance, since I have been working for a startup where the hours tend to drag on and I find it often drains the life out of me. I have been going to interviews, and some other companies are also startups / new companies and seem to have a similar attitude about working long hours. Maybe it's the technologies I use or the type of development, I don't know, but I'm curious whether anyone can offer advice on what path there is to being a programmer / developer while working for a company that respects a regular work week and would only rarely need to go past it. I realize this won't lead to being the highest paid in my field, but I'm OK with that and feel the trade-off would be worth it, as it would also give me time for my own projects. I know some people may say this is too general, but I believe it is a programmer-specific question: there tends to be a higher-than-average rate of overtime and of people working in "startup" venture situations than in many other fields, and there is definitely a mindset among a lot of people in this field of working long hours that doesn't exist in every industry.

    Read the article

  • Metacity malfunction preventing custom Gnome session from launching?

    - by QuietThud
    When I try to run Metacity in Ubuntu2D(12.04), I get the following message: alisa@ubuntu:~$ metacity Window manager warning: Screen 0 on display ":2.0" already has a window manager; try using the --replace option to replace the current window manager. I get the same message when running Compiz from the command line in 3D (it opens fine through the GUI (same thing for AWN)). I understand that these should be the default managers for the respective sessions. I'm trying to create a custom Gnome session using the following instructions: unity launcher-free session. Here is what I've put into my .session file: [GNOME Session] Name=Custom Unity2D Session RequiredComponents=gnome-settings-daemon; RequiredProviders=windowmanager;panel; DefaultProvider-windowmanager=metacity DefaultProvider-panel=unity-2d-panel FallbackSession=ubuntu-2d DesktopName=GNOME Since I'm having problems identifying my default, and the code refers to Metacity, I figured this may be relevant to my inability to load the custom session (it shows up on my login screen, but won't launch). I tried specifying Metacity as my default manager by adding exec metacity to the .xinitrc file, and I tried running metacity --replace, but neither worked. How do I determine my current default window manager, what should the default be, and how do I re-assign it? Also, please let me know if you think there may be other issues affecting my custom session. I am new to Linux, so list anything you think might be helpful. Thank you!

    Read the article
