Search Results

Search found 29473 results on 1179 pages for 'solaris 10'.

  • From a Perl test file, how do I check the contents of a file?

    - by justintime
    I want to test a script I have written in Perl, specifically checking what output it writes to a file. I wrote it some time ago and don't want to modify it to the extent of turning it into a module, but I would like to regression test it before adding some small functional changes. So far I have:

        use Test::Command tests => 10;
        exit_is_num($cmd, 0);
        ...

    But the command produces some files, and I want to check that those files are what I expect (either equal to a reference, or matching some regexp). Any suggestions?
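
    One way to check the generated files, sketched with Test::More (the file name, reference string, and pattern here are hypothetical; CPAN's Test::File::Contents wraps this same pattern if adding a dependency is acceptable):

        use Test::More tests => 2;

        # Slurp the file the script is expected to produce
        open my $fh, '<', 'output.txt' or die "Cannot open output.txt: $!";
        my $got = do { local $/; <$fh> };
        close $fh;

        # Exact comparison against a reference string, or a regexp match
        is($got, "expected contents\n", 'output.txt matches reference');
        like($got, qr/^expected \w+$/m, 'output.txt matches pattern');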

  • struct assignment operator on arrays

    - by Django fan
    Suppose I defined a structure like this:

        struct person {
            char name[10];
            int age;
        };

    and declared two person variables:

        person Bob;
        person John;

    where Bob.name = "Bob", Bob.age = 30 and John.name = "John", John.age = 25, and then I called:

        Bob = John;

    struct person would do a memberwise assignment and assign John's member values to Bob's. But arrays can't be assigned to arrays, so how does the assignment of the "name" array work?
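
    For illustration, a small complete C++ program showing that the compiler-generated assignment operator does copy the embedded array as part of the struct (effectively element by element), even though a standalone array cannot be assigned:

        #include <cstring>
        #include <iostream>

        struct person {
            char name[10];
            int age;
        };

        int main() {
            person john = {};
            std::strcpy(john.name, "John");
            john.age = 25;

            person bob = {};
            std::strcpy(bob.name, "Bob");
            bob.age = 30;

            bob = john;  // memberwise copy: the name[] array is copied with the struct

            std::cout << bob.name << " " << bob.age << "\n";  // prints: John 25
        }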

  • "'data(...).options' is null or not an object" in jquery-ui

    - by ripper234
    I'm using jquery-ui 1.8 and getting this error in Internet Explorer:

        User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)
        Timestamp: Mon, 10 May 2010 06:26:48 UTC
        Message: 'data(...).options' is null or not an object
        Line: 75
        Char: 13074
        Code: 0
        URI: http://localhost:58365/Scripts/Lib/jquery-ui-1.8.custom.min.js

    Is this a known bug? Is there a workaround? The error happens when I use droppable/draggable.
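
    This message typically appears when a widget method is invoked on an element whose widget was never initialized (or was already destroyed). A defensive guard, sketched for jQuery UI 1.8, where each widget stores its instance under its own name in .data() (the selector is hypothetical):

        // Only call draggable methods if the element was actually made draggable
        if ($('#target').data('draggable')) {
            $('#target').draggable('option', 'disabled', true);
        }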

  • BinaryFormatter with MemoryStream Question

    - by Changeling
    I am testing BinaryFormatter to see how it will work for me, and I have a simple question: when using it with the string HELLO and converting the MemoryStream to an array, it gives me 29 bytes, with only five of them being the actual data near the end:

        BinaryFormatter bf = new BinaryFormatter();
        MemoryStream ms = new MemoryStream();
        byte[] bytes;
        string originalData = "HELLO";
        bf.Serialize(ms, originalData);
        ms.Seek(0, 0);
        bytes = ms.ToArray();

    This returns bytes {Dimensions:[29]}:

        0 1 0 0 0 255 255 255 255 1 0 0 0 0 0 0 0 6 1 0 0 0 5 72 69 76 76 79 11

    Is there a way to only return the data encoded as bytes, without all the extraneous information?
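
    If the goal is just the raw bytes of the string (72 69 76 76 79 above) rather than a serialized object graph, a text encoding gives exactly that with no framing (a sketch):

        using System.Text;

        byte[] raw = Encoding.UTF8.GetBytes("HELLO");    // { 72, 69, 76, 76, 79 }
        string back = Encoding.UTF8.GetString(raw);      // "HELLO" again

    BinaryFormatter always wraps the payload in its own header and type records, which is where the other 24 bytes come from.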

  • CentOS OpenVZ fail to boot after kernel update

    - by SkechBoy
    After upgrading to the latest OpenVZ kernel, the CentOS server won't boot. When I try to boot the latest kernel, the server is stuck at this point (note that the images are taken from a virtual KVM): http://i.stack.imgur.com/4lusz.jpg Then I tried to boot some older kernels, and I get this error message, better shown in this image (http://i.stack.imgur.com/2SReF.jpg):

        kernel panic - not syncing - attempted to kill init

    Here is some useful information. fdisk -l:

        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sda: 2995.7 GB, 2995739688960 bytes
        255 heads, 63 sectors/track, 364211 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0004c4e4

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1         523     4199044+  82  Linux swap / Solaris
        /dev/sda2             524         785     2104515   83  Linux
        /dev/sda3             786      261869  2097157230   83  Linux
        /dev/sda4          261870      364211   822062115   83  Linux

    /etc/fstab:

        proc       /proc     proc    defaults        0 0
        none       /dev/pts  devpts  gid=5,mode=620  0 0
        /dev/sda1  none      swap    sw              0 0
        /dev/sda2  /boot     ext3    defaults        0 0
        /dev/sda3  /         ext3    defaults        0 0
        /dev/sda4  /home     ext3    defaults        0 0

    and the GRUB config file:

        title OpenVZ (2.6.18-274.18.1.el5.028stab098.1)
            root (hd0,1)
            kernel /vmlinuz-2.6.18-274.18.1.el5.028stab098.1 ro root=/dev/sda3 vga=0x317 selinux=0
            initrd /initrd-2.6.18-274.18.1.el5.028stab098.1.img
        title OpenVZ (2.6.18-274.7.1.el5.028stab095.1)
            root (hd0,1)
            kernel /vmlinuz-2.6.18-274.7.1.el5.028stab095.1 ro root=/dev/sda3 vga=0x317 selinux=0
            initrd /initrd-2.6.18-274.7.1.el5.028stab095.1.img
        title OpenVZ (2.6.18-194.8.1.el5.028stab070.4)
            root (hd0,1)
            kernel /vmlinuz-2.6.18-194.8.1.el5.028stab070.4 ro root=/dev/sda3 vga=0x317
            initrd /initrd-2.6.18-194.8.1.el5.028stab070.4.img

    Any help is greatly appreciated. Thanks.
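
    A "kernel panic - attempted to kill init" that appears even on previously working kernels often points at an initrd or root-device problem rather than the kernel itself. One hedged thing to try from a rescue/live environment is rebuilding the initrd for the new kernel (device names and kernel version taken from the output above):

        # Mount the installed system and chroot into it
        mount /dev/sda3 /mnt
        mount /dev/sda2 /mnt/boot
        mount --bind /dev  /mnt/dev
        mount --bind /proc /mnt/proc
        chroot /mnt

        # Regenerate the initrd for the newest OpenVZ kernel
        mkinitrd -f /boot/initrd-2.6.18-274.18.1.el5.028stab098.1.img 2.6.18-274.18.1.el5.028stab098.1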

  • Ubuntu 13.04 to 13.10: Filesystem check or mount failed [migrated]

    - by SamHuckaby
    I attempted to upgrade from Ubuntu 13.04 to 13.10 today, and mid-upgrade the system started flaking out and eventually locked up entirely. I was forced to restart the computer, and am now unable to get the computer to boot up at all. When I boot currently, it takes me to the GRUB menu, and I can choose to boot normally or boot an older version. I have tried several things, which I list below, but no matter what, when I try to finish booting into Ubuntu, I receive the following error:

        Filesystem check or mount failed.
        A maintenance shell will now be started.
        CONTROL-D will terminate this shell and continue booting after re-trying filesystems.
        Any further errors will be ignored
        root@ubuntu-computername:~#

    I have run fsck -f and everything appears correct; no errors are reported and it passes all 5 checks. If I run fdisk -l, I get the following information:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 4096 bytes / 4096 bytes
        Disk identifier: 0x00010824

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   608456703   304227328   83  Linux
        /dev/sda2       608458750   625141759     8341505    5  Extended
        Partition 2 does not start on physical sector boundary.
        /dev/sda5       608458752   625141759     8341504   82  Linux swap / Solaris

        Disk /dev/sdb: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0fb4b7e8

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1            8192   625139711   312565760    7  HPFS/NTFS/exFAT

    I am considering just installing a new OS on the other disk, which currently has nothing on it, and then attempting to scrape my data off the old disk (thankfully I didn't encrypt the files). Really my question is this: can I salvage this Ubuntu install, or should I give up and just reinstall?
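
    Since fsck passes but the mount still fails, it is worth checking that /etc/fstab still matches reality before wiping anything; a stale UUID after an interrupted upgrade produces exactly this symptom. A hedged check from the maintenance shell:

        # UUIDs the kernel actually sees ...
        blkid /dev/sda1 /dev/sda5

        # ... versus the UUIDs the boot process expects
        cat /etc/fstab

        # If they differ, fix fstab, then re-run the check explicitly
        fsck -f /dev/sda1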

  • Windows Azure Platform, latest version?

    - by Vimvq1987
    I searched the internet but found nothing. The whitepapers for the Windows Azure Platform say things like: "In its first release, the maximum size of a single database in SQL Azure Database is 10 gigabytes" and "A few things are omitted in the technology's first release, however, such as the SQL Common Language Runtime (CLR) and support for spatial data. (Microsoft says that both will be available in a future version.)" I want to know whether Microsoft has since updated the Windows Azure Platform and removed these limits. I decided to post this question here instead of Serverfault.com because it's more related to programming than administration. Thank you.

  • Transpose a Collection

    - by Joseph Melettukunnel
    Hello, I have a list of different sizes of a T-shirt, e.g. S, M, L. Since this might change per T-shirt (sometimes we just have e.g. M, L), we load this into a List of sizes. Since most DataGrids (xamDataGrid, WPF Toolkit DataGrid) need properties for binding to the columns, I'd like to somehow transpose my data. Does anyone have an idea how to do this? E.g. instead of having a List of Size { string sizeName; int available; int defect; int ordered; }:

             Avail.  Defect  Ordered
        [S]  1       2       3
        [M]  1       2       3
        [L]  1       2       3

    I want an object which has the properties S, M, L containing the values like this:

                 [S]  [M]  [L]
        Avail.   1    2    3
        Defect   1    2    3
        Ordered  1    2    3

    The problem here is that I don't know how many sizes will be available for the T-shirt; it might be 3, 4, or 10. Thanks for any help. Cheers. PS: Here is a mockup of how the final grid should look: http://img39.imageshack.us/img39/9161/multirowspangridfixedel.png
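
    Since the set of sizes is only known at runtime, one option is to pivot into a DataTable, which both xamDataGrid and the WPF DataGrid can bind to; columns are then created per size on the fly. A sketch in C# (the Size members are taken from the question):

        using System.Collections.Generic;
        using System.Data;

        public class Size
        {
            public string SizeName;
            public int Available;
            public int Defect;
            public int Ordered;
        }

        public static class SizePivot
        {
            public static DataTable Transpose(IList<Size> sizes)
            {
                var table = new DataTable();
                table.Columns.Add("Property", typeof(string));
                foreach (var s in sizes)                 // one column per size: S, M, L, ...
                    table.Columns.Add(s.SizeName, typeof(int));

                var avail = table.NewRow();   avail[0] = "Avail.";
                var defect = table.NewRow();  defect[0] = "Defect";
                var ordered = table.NewRow(); ordered[0] = "Ordered";
                for (int i = 0; i < sizes.Count; i++)
                {
                    avail[i + 1]   = sizes[i].Available;
                    defect[i + 1]  = sizes[i].Defect;
                    ordered[i + 1] = sizes[i].Ordered;
                }
                table.Rows.Add(avail);
                table.Rows.Add(defect);
                table.Rows.Add(ordered);
                return table;
            }
        }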

  • Exporting XML from FileMaker Pro Server

    - by Jeno
    FileMaker Pro 10 Server: Mac OS X Server 10.4.11
    Data server: Windows Server 2008

    I am having a cross-platform issue when exporting XML from the FileMaker Pro client on a Mac to the data server. My FileMaker Pro server is hosted on Mac OS X, and I need users to export their data to a data server hosted on Windows Server. I created a button (function/script) in the FileMaker form for users to export data once they finish their job. The FileMaker Pro client on the PC works perfectly, but it doesn't work on the Mac. I've tried every combination I can think of for the location path, as documented at: http://www.filemaker.com/11help/html/create_db.8.32.html#1030283 Any idea? Thanks
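
    FileMaker export paths are platform-prefixed, and a script step can list one path per line so each platform finds a form it understands; a sketch (the host, share, and file names are hypothetical):

        filewin://dataserver/share/export.xml
        filemac:/Volumes/share/export.xml

    On the Mac, the Windows share generally has to be mounted first (so it appears under /Volumes) before the filemac: path can resolve.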

  • Running Perl Scripts on servers that don't have the modules

    - by envinyater
    I need to run a Perl script that gathers system information and that will be deployed to and executed on different Unix servers. Right now I am writing and testing it, and I'm receiving this error:

        Can't locate XML/DOM.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at test.pl line 7.
        BEGIN failed--compilation aborted at test.pl line 7.

    So I am simply using XML::DOM, which I assumed came with Perl, but it isn't there for this version (5.10.1) on this particular server. Anyway, is there a way I can design my script and package modules into it while keeping the .pl extension, which is a requirement for this script?
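
    One low-tech way to keep the .pl while carrying non-core modules along is to ship them in a lib directory deployed next to the script (a sketch; App::FatPacker is an alternative that inlines modules into the .pl itself, and note that XS-based dependencies such as XML::Parser need a per-platform build):

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Search a bundled lib/ directory (shipped alongside the script) first
        use FindBin;
        use lib "$FindBin::Bin/lib";

        use XML::DOM;   # resolved from ./lib on servers that lack the module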

  • Java - SwingWorker - Can we call one SwingWorker from other SwingWorker instead of EDT

    - by Yatendra Goel
    I have a SwingWorker as follows:

        public class MainWorker extends SwingWorker<Void, MyObject> {
            :
            :
        }

    I invoked the above SwingWorker from the EDT:

        MainWorker mainWorker = new MainWorker();
        mainWorker.execute();

    Now, the mainWorker creates 10 instances of a MyTask class, so that each instance runs on its own thread and the work completes faster. But the problem is that I want to update the GUI from time to time while the tasks are running. I know that if the tasks were executed by the mainWorker itself, I could have used the publish() and process() methods to update the GUI. But as the tasks are executed by threads other than the SwingWorker thread, how can I update the GUI with the intermediate results generated by the threads executing the tasks?
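
    Any thread, not only a SwingWorker, may hand results to the EDT via SwingUtilities.invokeLater; a sketch of what a MyTask might look like (the JLabel field is a hypothetical stand-in for whatever component gets updated):

        import javax.swing.JLabel;
        import javax.swing.SwingUtilities;

        public class MyTask implements Runnable {
            private final JLabel status;  // hypothetical GUI component to update

            public MyTask(JLabel status) {
                this.status = status;
            }

            @Override
            public void run() {
                // ... do a slice of the work ...
                final String progress = "one unit of work done";

                // Schedule the GUI update on the EDT; safe from any thread
                SwingUtilities.invokeLater(() -> status.setText(progress));
            }
        }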

  • speed up a sql query to mysql?

    - by fayer
    In my MySQL database I've got the geonames database, containing all countries, states and cities. I am using this to create a cascading menu so the user can select where he is from: country - state - county - city. But the main problem is that the query searches through all 7 million rows in that table each time I want to get the list of child rows, and that takes a while, 10-15 seconds. I wonder how I could speed this up: caching? table views? reorganizing the table structure somehow? And most importantly, how do I do these things? Are there good tutorials you could link me to? I appreciate all help and feedback discussing smart ways of handling this issue!
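
    If each row stores the id of its parent region, an index on that column usually turns the children lookup into an index range scan instead of a full scan of the 7 million rows; a sketch (table and column names are assumptions about the schema):

        -- One-time: index the column used to find children
        CREATE INDEX idx_geonames_parent ON geonames (parent_id);

        -- The cascading-menu query then becomes an index lookup
        SELECT id, name
        FROM geonames
        WHERE parent_id = 12345;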

  • performance issue in a select query from a single table

    - by daedlus
    Hi, I have a table as below:

        dbo.UserLogs
        --------------------------------------
        Id | UserId | Date | Name | P1 | Dirty
        --------------------------------------

    There can be several records per UserId (even millions). I have a clustered index on the Date column and query this table very frequently over time ranges. The column 'Dirty' is non-nullable and can take only 0 or 1, so I have no indexes on 'Dirty'. I have several million records in this table, and in one particular case in my application I need to query this table to get all UserIds that have at least one record marked dirty. I tried this query:

        select distinct(UserId) from UserLogs where Dirty=1

    I have 10 million records in total, and this takes about 10 minutes to run; I want it to run much faster than that. (I am able to query this table on the Date column in less than a minute.) Any comments/suggestions are welcome. My environment: 64-bit, Sybase 15.0.3, Linux.
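
    Since the query only needs user ids that have at least one dirty row, a composite index on (Dirty, UserId) lets the server answer it from the index alone, without touching the wide table rows; a sketch:

        -- Covering index: the DISTINCT UserId scan stays inside the index
        CREATE INDEX idx_userlogs_dirty_user ON dbo.UserLogs (Dirty, UserId)

        SELECT DISTINCT UserId
        FROM dbo.UserLogs
        WHERE Dirty = 1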

  • Classical Round Table algorithm?

    - by user1795954
    Coins with different values are spread in a circle around a round table. We can select any set of coins such that for any pair of adjacent coins, at least one must be selected (both may be selected too). Under this condition we have to find the minimum possible total value of the selected coins. I have to respect the time complexity, so instead of a naive recursive brute force, I tried doing it with dynamic programming. But I get Wrong Answer - my algorithm is incorrect. If someone could suggest an algorithm to do it dynamically, I could code it myself in C++. Also, the maximum number of coins is 10^6, so I think an O(n) solution exists.
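
    This is minimum-weight vertex cover on a cycle, and the standard way to break the circular constraint is to run a line DP twice: once forcing the first coin selected, once forcing it unselected (which then forces both of its neighbours). A sketch in C++, O(n):

        #include <algorithm>
        #include <vector>

        // Minimum total value selecting coins so that every adjacent pair
        // around the circle has at least one selected coin.
        long long minCoins(const std::vector<long long>& w) {
            const int n = static_cast<int>(w.size());
            const long long INF = 1LL << 60;
            if (n < 2) return 0;                        // no adjacent pair to cover
            if (n == 2) return std::min(w[0], w[1]);

            // DP over coins 1..n-1; firstTaken says whether coin 0 is selected.
            auto solveLine = [&](bool firstTaken) -> long long {
                long long take = w[1];                  // coin 1 selected
                long long skip = firstTaken ? 0 : INF;  // coin 1 skippable only if coin 0 covers pair (0,1)
                for (int i = 2; i < n; ++i) {
                    long long nextTake = w[i] + std::min(take, skip);
                    long long nextSkip = take;          // skipping coin i forces coin i-1 selected
                    take = nextTake;
                    skip = nextSkip;
                }
                if (firstTaken)
                    return w[0] + std::min(take, skip); // pair (n-1, 0) covered by coin 0
                return take;                            // coin 0 skipped: coin n-1 must be selected
            };

            return std::min(solveLine(true), solveLine(false));
        }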

  • Create Custom Builds of an Xcode Project

    - by macinjosh
    I am going to build a Mac application written in Obj-C with Xcode. For argument's sake let's say it will have 10 optional features. I need a way to enable or disable those features to create custom builds of the application. These builds would be automated (most likely through the Mac OS X Terminal) so I would need a way to state which of these features are enabled/disabled at build time (a configuration file or CLI arguments would be ideal.) So what is the best way to accomplish this? I'm trying to plan this out before I start coding so that there is proper separation in my code base to allow for these features to come and go. Ideally the custom build would only contain compiled code for the features it should have. In other words I don't want to always compile all the features and condition them out at runtime.
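
    One common way to get this, sketched here with hypothetical flag and target names: guard each optional feature in the Obj-C sources with a preprocessor macro (#if FEATURE_A ... #endif) and inject the macros per build from the command line, so disabled features are never compiled in:

        # Build with features A and C enabled, everything else compiled out
        xcodebuild -project MyApp.xcodeproj -target MyApp -configuration Release \
            GCC_PREPROCESSOR_DEFINITIONS='FEATURE_A=1 FEATURE_C=1'

    A small wrapper script can translate a configuration file into that GCC_PREPROCESSOR_DEFINITIONS string, keeping the automated Terminal builds driven by plain text.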

  • Select and copy to MySQL table PHP

    - by Liju
    Can I insert the Table1 values into Table2 like the following, based on Name and Date?

    Table1:

        Id  Date        Name  Time
        1   20/11/2010  Tom   08:00
        2   20/11/2010  Tom   08:30
        3   20/11/2010  Tom   09:00
        4   20/11/2010  Tom   09:30
        5   20/11/2010  Tom   10:00
        6   20/11/2010  Tom   10:30
        7   20/11/2010  Tom   11:30
        8   20/11/2010  Tom   14:30
        9   20/11/2010  John  08:10
        10  20/11/2010  John  09:30
        11  20/11/2010  John  11:00
        12  20/11/2010  John  13:00
        13  20/11/2010  John  14:30
        14  20/11/2010  John  16:00
        15  20/11/2010  John  17:30
        16  20/11/2010  John  19:00
        17  20/11/2010  Ram   08:05
        18  20/11/2010  Ram   08:30
        19  20/11/2010  Ram   09:00
        20  20/11/2010  Ram   09:45
        21  20/11/2010  Ram   12:00
        22  20/11/2010  Ram   13:30
        23  20/11/2010  Ram   15:00

    Table2:

        Id  Date        Name  TimeIn1  TimeOut1  TimeIn2  TimeOut2  TimeIn3  TimeOut3  TimeIn4  TimeOut4
        1   20/11/2010  Tom   08:00    08:30     09:00    09:30     10:00    10:30     11:30    14:30
        2   20/11/2011  John  08:10    09:30     11:00    13:00     14:30    16:00     17:30    19:00
        3   20/11/2012  Ram   08:05    08:30     09:00    09:45     12:00    13:30     15:00    Null

    Help me please... Liju
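
    As a first step, MySQL can at least collapse each person's times into one ordered, comma-separated list per (Date, Name); splitting that list into the eight In/Out columns is then easy in the application, or in SQL with further string functions. A sketch:

        SELECT Date, Name,
               GROUP_CONCAT(Time ORDER BY Time SEPARATOR ',') AS times
        FROM Table1
        GROUP BY Date, Name;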

  • popen fails with "sh: <command>: not found"

    - by smallmeans
    I'm developing a server application and I recently encountered this weird error on a testing server (Debian Squeeze). Every executable I pass to popen fails with a message:

        sh: sort: not found    // happens with any command

    This happens regardless of whether I point to the full path returned by "type" or keep it short. As mentioned earlier, this happens in only one testing environment; to add confusion, I am running the same OS elsewhere and have had no problem whatsoever. popen is apparently using sh to execute commands, but if I run the same command through the prompt (bash or sh), everything's fine. Thanks in advance. (PS: I even tried Python's os.popen just to nail this head-scratcher, and it works!)

    Edit: this is a simple call that fails:

        $command = "tail -10 myfile";
        $handle = popen($command.' 2>&1', 'r');
        if ($handle) {
            while (!feof($handle)) {
                ... // process buffer
            }
        }

    returns:

        sh: tail: not found
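
    "not found" from sh for standard tools usually means the PATH inherited by the PHP process is empty or wrong on that one server; two hedged workarounds (the file name is hypothetical), plus echoing getenv('PATH') is a quick way to confirm the diagnosis:

        <?php
        $file = 'myfile';  // hypothetical input

        // Workaround 1: absolute path, so the shell needs no PATH lookup
        $handle = popen('/usr/bin/tail -n 10 ' . escapeshellarg($file) . ' 2>&1', 'r');

        // Workaround 2: hand the child shell a sane PATH explicitly
        putenv('PATH=/usr/local/bin:/usr/bin:/bin');
        $handle = popen('tail -n 10 ' . escapeshellarg($file) . ' 2>&1', 'r');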

  • Problem with WiX Votive 3.0 preprocessor

    - by Leith Bade
    I have just started using WiX for the first time. I added a WiX Votive project to my existing C project. To automatically select the correct source folder for the binaries, I used the following:

        <Directory Id="INSTALLLOCATION" Name="Trapeze Capture For Objective" FileSource="$(var.CaptureForObjective.TargetDir)">

    That results in the following error:

        C:\code\CaptureForObjective\Installer\Product.wxs(10,0): error CNDL0150: Undefined preprocessor variable '$(var.CaptureForObjective.TargetDir)'

    The C project is called CaptureForObjective, and the WiX project is called Installer. What do I need to do to get this to work?
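
    Votive only defines the $(var.ProjectName.*) preprocessor variables for projects that the WiX project references. Adding a project reference from Installer to CaptureForObjective (Add Reference... in Visual Studio, or by hand in Installer.wixproj) should make the variable available; a sketch of the hand-edited form, where the relative path and project file extension are assumptions:

        <ItemGroup>
          <ProjectReference Include="..\CaptureForObjective\CaptureForObjective.vcproj">
            <Name>CaptureForObjective</Name>
          </ProjectReference>
        </ItemGroup>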

  • Running Sybase ISQL scripts from windows batch file

    - by user1328709
    I have already researched this extensively, both on this site and on Google. I have created a number of batch files that perform certain automated transactions (backups etc.) on our production database. I want to further simplify my end-of-day processes by taking the dumps using a script that accepts some input parameters. The script is able to log in to the isql prompt but unable to execute the commands:

        @ECHO ***Started***
        @ECHO Enter MonthDay(MMDD)
        SET /p md=
        @ECHO %md%
        mkdir \\10.20.1.17\arch\212%md%_banking
        set run=isql -Uuser -SORBITS -Ppass
        %run%
        @echo dump database banking to '/media/newArch/212%md%_banking/212%md%EOD_banking.dmp' with compression=5
        @echo dump database master to '/media/newArch/212%md%_banking/212%md%EOD_master.dmp'
        @echo go
        pause

    I have been unsuccessful at putting these in a separate script file, because the script itself uses a passed parameter. Please give me hints and links. Thanks
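
    The echo lines after %run% never reach isql: the batch file blocks inside isql and only prints them after it exits. One pattern that works is to write the SQL, with %md% already expanded, to a temporary file and feed it to isql with -i (a sketch, reusing the names from the script above):

        @ECHO OFF
        SET /p md=Enter MonthDay(MMDD):
        MKDIR \\10.20.1.17\arch\212%md%_banking

        REM Generate the dump script with the parameter substituted
        >  dump.sql ECHO dump database banking to '/media/newArch/212%md%_banking/212%md%EOD_banking.dmp' with compression=5
        >> dump.sql ECHO go
        >> dump.sql ECHO dump database master to '/media/newArch/212%md%_banking/212%md%EOD_master.dmp'
        >> dump.sql ECHO go

        isql -Uuser -SORBITS -Ppass -i dump.sql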

  • IA-32: Pushing a byte onto a stack isn't possible on Pentium, why?

    - by Tim Green
    Hi, I've come to learn that you cannot push a byte directly onto the Intel Pentium's stack; can anyone explain this to me please? The reason I've been given is that the esp register is word-addressable (or that is the assumption in our model) and it must hold an "even address". I would have assumed decrementing the value of some 32-bit binary number wouldn't mess with the alignment of the register, but apparently I don't understand enough. I have tried some NASM tests and found that if I declare a variable (bite db 123) and push it onto the stack, esp is decremented by 4 (indicating that it pushed 32 bits?). But "push byte bite" (sorry for my choice of variable names) results in an error:

        test.asm:10: error: Unsupported non-32-bit ELF relocation

    Any words of wisdom would be greatly appreciated during this troubled time. I am a first-year undergraduate, so sorry for my naivety in any of this. Tim
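
    Two things seem to be going on: the IA-32 push instruction always pushes a word or doubleword (an 8-bit immediate is sign-extended to the operand size first, never stored as a lone byte), and "push byte bite" asks NASM to squeeze the address of bite - which needs a 32-bit relocation - into a byte, hence the error. To push the value of the byte, widen it explicitly; a sketch:

        section .data
        bite    db 123

        section .text
        global _start
        _start:
            movzx eax, byte [bite]   ; load the byte, zero-extended to 32 bits
            push  eax                ; ESP drops by 4: a full dword goes on the stack
            add   esp, 4             ; clean it off again
            mov   eax, 1             ; sys_exit
            xor   ebx, ebx
            int   0x80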

  • Rails: Cannot add :precision or :scale options with change_column in a migration?

    - by Josh Pinter
    This seems to have been asked before (http://stackoverflow.com/questions/1402547/rails-decimal-precision-and-scale), but when running a change_column migration, :precision and :scale don't actually affect the schema or database, yet db:migrate runs without errors. My migration file looks like this:

        class ChangePrecisionAndScaleOfPaybackPeriodInTags < ActiveRecord::Migration
          def self.up
            change_column :tags, :payback_period, :decimal, { :scale => 3, :precision => 10 }
          end

          def self.down
            change_column :tags, :payback_period, :decimal
          end
        end

    But my schema (and the data) remains as:

        t.decimal "rate"            # previous column
        t.decimal "payback_period"
        t.string  "component_type"  # next column

    Anybody else have this issue? Thanks, Josh
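
    If the adapter silently drops the options, one blunt but dependable workaround is raw SQL in the migration; a sketch for MySQL (the ALTER syntax varies by database):

        class ChangePrecisionAndScaleOfPaybackPeriodInTags < ActiveRecord::Migration
          def self.up
            # Bypass change_column and set the column type directly
            execute "ALTER TABLE tags MODIFY payback_period DECIMAL(10,3)"
          end

          def self.down
            execute "ALTER TABLE tags MODIFY payback_period DECIMAL"
          end
        end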

  • How to create Shared VB Array Initialisors for NerdDinner

    - by David A Gibson
    Hello, I am trying to work my way through the NerdDinner tutorial, and as an exercise I'm converting it to VB as I go. I'm not very far in, and after having gotten past the C# yield statement I'm stuck on shared VB array initializers:

        static IDictionary<string, Regex> countryRegex = new Dictionary<string, Regex>() {
            { "USA", new Regex("^[2-9]\\d{2}-\\d{3}-\\d{4}$")},
            { "UK", new Regex("(^1300\\d{6}$)|(^1800|1900|1902\\d{6}$)|(^0[2|3|7|8]{1}[0-9]{8}$)|(^13\\d{4}$)|(^04\\d{2,3}\\d{6}$)")},
            { "Netherlands", new Regex("(^\\+[0-9]{2}|^\\+[0-9]{2}\\(0\\)|^\\(\\+[0-9]{2}\\)\\(0\\)|^00[0-9]{2}|^0)([0-9]{9}$|[0-9\\-\\s]{10}$)")},

    Can anyone please help me write this in VB?

        Public Shared countryRegex As IDictionary(Of String, Regex) = New Dictionary(Of String, Regex)() {("USA", New Regex("^[2-9]\\d{2}-\\d{3}-\\d{4}$"))}

    This code has an error, as it does not accept the String and the Regex as an item for the array. Thanks
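
    Collection initializers only arrived in VB with the 2010 From { ... } syntax; on earlier versions the usual pattern is to populate the dictionary in a Shared function (a sketch with the first two entries - note that VB string literals are not escaped, so the backslashes are not doubled):

        Public Shared ReadOnly countryRegex As IDictionary(Of String, Regex) = BuildCountryRegex()

        Private Shared Function BuildCountryRegex() As IDictionary(Of String, Regex)
            Dim map As New Dictionary(Of String, Regex)()
            map.Add("USA", New Regex("^[2-9]\d{2}-\d{3}-\d{4}$"))
            map.Add("UK", New Regex("(^1300\d{6}$)|(^1800|1900|1902\d{6}$)|(^0[2|3|7|8]{1}[0-9]{8}$)|(^13\d{4}$)|(^04\d{2,3}\d{6}$)"))
            Return map
        End Function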

  • Rake doesn't know how to build task?

    - by Schroedinger
    Using a rake task to import data into a database; the file is as follows:

        namespace :db do
          desc "load imported data from csv"
          task :load_csv_data => :environment do
            require 'fastercsv'
            require 'chronic'
            FasterCSV.foreach("input.csv", :headers => true) do |row|
              Trade.create(
                :name     => row[0],
                :type     => row[4],
                :price    => row[6].to_f,
                :volume   => row[7].to_i,
                :bidprice => row[10].to_f,
                :bidsize  => row[11].to_i,
                :askprice => row[14].to_f,
                :asksize  => row[15].to_i
              )
            end
          end
        end

    When attempting to use this, with the right CSV files and the other elements in place, it says:

        Don't know how to build task 'db:import_csv_data'

    I know this structure works because I've tested it; I'm just trying to get it to convert to the new values on the fly. Suggestions?
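
    For what it's worth, the task above is registered as db:load_csv_data, while the error shows db:import_csv_data being invoked; listing rake's known tasks makes a name mismatch easy to spot (a sketch):

        # List every task rake actually knows about under the db namespace
        rake -T db

        # Invoke the task under the name it was defined with
        rake db:load_csv_data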

  • How to override TOMCAT Oracle ojdbc14 driver in the application?

    - by Luís Henrique Rocha
    The Tomcat server is using an Oracle 9G ojdbc14 driver for its JNDI connections, located in the /common/lib folder. My web application uses Maven + Spring, and I'm getting the dataSource using Spring's JNDI features. I'm trying to bypass Tomcat's old ojdbc14 driver with a newer one (ojdbc14 10.2.0.4.0). I've tried putting the jars in the WEB-INF/lib folder as a project dependency, but it doesn't work: the application keeps using the old Oracle driver in the Tomcat folder. I can't simply update the shared driver to the newest version, because lots of other projects are using it. Does anyone have a clue?
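
    Container-managed JNDI DataSources are created with the shared classloader, so the driver in WEB-INF/lib never gets a look-in. One approach worth sketching: skip JNDI for this one application and define the pool inside Spring, where the webapp classloader (which by default searches WEB-INF/lib before common/lib) loads the new jar; the URL and credentials below are placeholders:

        <!-- Spring bean replacing the JNDI lookup; requires commons-dbcp
             and the newer ojdbc14 jar in WEB-INF/lib -->
        <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
            <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
            <property name="url" value="jdbc:oracle:thin:@dbhost:1521:ORCL"/>
            <property name="username" value="appuser"/>
            <property name="password" value="secret"/>
        </bean>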

  • m2eclipse resource filtering

    - by drewzilla
    I'm having problems with resource filtering using m2eclipse Maven support in Eclipse. It seems that filtering only takes place on resources that have changed. This is fundamentally flawed, because if I have a file that references properties (e.g. ${my.property}) and the value of a property changes, the filtering is only performed if the referencing file is also modified; if I only change the property value (in my pom.xml), the filtering is not applied to the files that reference it. So, if I make a change to a property in my pom file, the filtering is not applied. However, if I then go to the file that references that property (e.g. a Spring config file) and edit and save it, the filtering is applied. I did read somewhere that "m2eclipse skips filtering if there were no resource changes during incremental build". I'm using m2eclipse 0.10.x. Has anyone else come across this? Thanks, Andrew
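
    Until the incremental-build behaviour changes, forcing a full resource pass after editing the pom is a workable stopgap (a sketch):

        # Re-run filtering outside Eclipse ...
        mvn resources:resources

        # ... or inside Eclipse, trigger a full build with Project > Clean...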
