Search Results

Search found 53261 results on 2131 pages for 'system state'.

Page 531/2131

  • Lookahead regex produces unexpected group

    - by Ivan Yatskevich
    I'm trying to extract a page name and query string from a URL which should not contain .html. Here is example code in Java: public class TestRegex { public static void main(String[] args) { Pattern pattern = Pattern.compile("/test/(((?!\\.html).)+)\\?(.+)"); Matcher matcher = pattern.matcher("/test/page?param=value"); System.out.println(matcher.matches()); System.out.println(matcher.group(1)); System.out.println(matcher.group(2)); } } Running this code produces the following output: true page e. What's wrong with my regex such that the second group contains the letter e instead of param=value?
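
    A likely explanation, with a sketch: in the pattern, the inner ((?!\.html).) is itself a capturing group, so after the + repeats it, group(2) holds only its last repetition (the letter e), and the query string actually lands in group(3). Making the inner group non-capturing restores the expected numbering; the sketch below reuses the question's class and variable names.

    ```java
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class TestRegex {
        public static void main(String[] args) {
            // (?:...) makes the per-character group non-capturing, so
            // group(1) is the page name and group(2) is the query string.
            Pattern pattern = Pattern.compile("/test/((?:(?!\\.html).)+)\\?(.+)");
            Matcher matcher = pattern.matcher("/test/page?param=value");
            System.out.println(matcher.matches()); // true
            System.out.println(matcher.group(1));  // page
            System.out.println(matcher.group(2));  // param=value
        }
    }
    ```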

    Read the article

  • Automate refactoring of import/using directives using ReSharper and Visual Studio 2010

    - by Mendy
    I want to automate how Visual Studio 2010 / ReSharper 5 inserts import directives, so that my internal namespaces are placed inside the namespace scope. Like this: using System; using System.Collections.Generic; using System.Linq; using StructureMap; using MyProject.Core; // <--- Move inside. using MyProject.Core.Common; // <--- Move inside. namespace MyProject.DependencyResolution { using Core; using Core.Common; // <--- My internal namespaces should end up here! public class DependencyRegistrar { ........... } }

    Read the article

  • SecurityException when trying to export a java resource

    - by thecoop
    I'm trying to get the source of a Java resource stored in an Oracle database using this code (connecting as SYSTEM for testing): DECLARE javalob CLOB; BEGIN DBMS_LOB.CREATETEMPORARY(javalob, false); DBMS_JAVA.EXPORT_RESOURCE('RESOURCENAME', 'SCHEMA', javalob); DBMS_OUTPUT.PUT_LINE(javalob); END; But when I try to run it I get this: Java call terminated by uncaught Java exception: java.lang.SecurityException: cannot read <Resource Handle: RESOURCENAME|SCHEMA|301> because SYSTEM does not have execute privilege on it The thing is, I'm not sure how to grant permissions on <Resource Handle: RESOURCENAME|SCHEMA|301>, as this isn't a SQL or PL/SQL object. And why doesn't SYSTEM have access to it anyway?

    Read the article

  • Write a program that allows the user to enter a string and then prints the letters of the String separated by commas

    - by WM
    The output always ends with a trailing comma, for example H,E,L,L,O,. How can I limit the commas? I want commas only between letters, for example H,E,L,L,O. import java.util.Scanner; import java.lang.String; public class forLoop { public static void main(String[] args) { Scanner Scan = new Scanner(System.in); System.out.print("Enter a string: "); String Str1 = Scan.next(); String newString=""; String Str2 =""; for (int i=0; i < Str1.length(); i++) { newString = Str1.charAt(i) + ","; Str2 = Str2 + newString; } System.out.print(Str2); } }
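
    One common fix, sketched here (variable names loosely follow the question): append the comma only when another letter follows, and use a StringBuilder to avoid repeated string concatenation.

    ```java
    import java.util.Scanner;

    public class ForLoop {
        public static void main(String[] args) {
            Scanner scan = new Scanner(System.in);
            System.out.print("Enter a string: ");
            String str1 = scan.next();
            StringBuilder str2 = new StringBuilder();
            for (int i = 0; i < str1.length(); i++) {
                str2.append(str1.charAt(i));
                if (i < str1.length() - 1) { // no comma after the last letter
                    str2.append(',');
                }
            }
            System.out.print(str2); // e.g. H,E,L,L,O
        }
    }
    ```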

    Read the article

  • Mixing a table type with other types as parameters to a stored procedure in C#

    - by amemak
    Hi, I am asking how I can pass multiple parameters to a stored procedure, one of which is a user-defined table type. When I try it, this statement: INSERT INTO BD (ID, VALUE, BID) values( (SELECT t1.ID, t1.Value FROM @Table AS t1),someintvalue) (here @Table is the user-defined table parameter) produces these errors: Msg 116, Level 16, State 1, Procedure UpdateBD, Line 12 Only one expression can be specified in the select list when the subquery is not introduced with EXISTS. Msg 109, Level 15, State 1, Procedure UpdateBD, Line 11 There are more columns in the INSERT statement than values specified in the VALUES clause. The number of values in the VALUES clause must match the number of columns specified in the INSERT statement. Thank you
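
    A sketch of the usual fix, assuming the table-valued parameter exposes ID and Value columns and BID should receive the scalar value (@SomeIntValue is an illustrative name): drop the VALUES clause and put the scalar directly into the SELECT list.

    ```sql
    -- Inside the stored procedure; @Table is the table-valued parameter.
    INSERT INTO BD (ID, VALUE, BID)
    SELECT t1.ID, t1.Value, @SomeIntValue
    FROM @Table AS t1;
    ```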

    Read the article

  • How would you start automating my job? - Part 2

    - by Jurily
    (Followup to this question) After surviving the first wave of incoming shipments (9 hours of copy/paste), I now believe I have all the requirements. Here is the updated workflow: Monkey collects email attachments (4 Excel spreadsheets, 1 PDF) Monkey creates central database, does complex calculations (right now this is also an Excel spreadsheet) Monkey sends data to two bosses, who set the retail prices independently; first one to reply wins Monkey sends order form to our other warehouses, also Excel Monkey sends spreadsheets to VIP customers, carefully sanitized and formatted (4 different discount categories) Jurily enters the data into the accounting system. I've given up on automating this part, there's too much business logic involved, and the database is a pile of sh^W legacy My question: What technologies would you use for a quick and dirty solution? I'm mostly sold on C#, but coming from a Linux/C++ background, I'm horribly confused about my choices in Microsoft-land. For bonus points: How would you redesign the whole system from the ground up? P.S. in case you were wondering, my job title is System Administrator.

    Read the article

  • Android bluetooth socket error

    - by ashwini
    I am using the backport Bluetooth API on Android 1.6, with the Google Bluetooth Chat sample app for testing. The app works fine in normal scenarios. But when I try to connect to a paired device which is powered off, I get the following error: 01-04 09:00:11.629: ERROR/BluetoothEventLoop.cpp(84): onGetRemoteServiceChannelResult: D-Bus error: org.bluez.Error.ConnectionAttemptFailed (Host is down) 01-04 09:00:11.729: DEBUG/dalvikvm(128): GC freed 4535 objects / 256008 bytes in 296ms 01-04 09:00:21.880: ERROR/bluetooth_RfcommSocket.cpp(1433): connect error: Host is down (112) Despite this, the app sets the state to connected and is unable to catch the exception. Why does this happen? Or is it a limitation of the backport API? Any help is appreciated, as I am struggling to get things running.

    Read the article

  • Using java.util.logging, is it possible to restart logs after a certain period of time?

    - by Fry
    I have some Java code that will be running as a data importer for a much larger project. The initial logging code was done with the java.util.logging classes, so I'd like to keep it if possible, but it seems a little inadequate now given the amount of data passing through the importer. Often the importer will get data that the main system doesn't have information for, or that doesn't match the system's data, so it is ignored, but a message is written to the log about what information was dropped and why it wasn't imported. The problem is that this log tends to grow in size very quickly, so we'd like to be able to start a fresh log daily or weekly. Does anybody know if this can be done with the logging classes, or would I have to switch to log4j or something custom? Thanks for any help!
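
    java.util.logging has no built-in time-based rollover; FileHandler only rotates by size via its (pattern, limit, count) constructor. One hedged workaround, sketched below with illustrative names, is to open a FileHandler whose file name contains the current date and swap it onto the logger whenever the date changes:

    ```java
    import java.io.IOException;
    import java.time.LocalDate;
    import java.util.logging.FileHandler;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;

    public class DailyFileHandler {
        private final Logger logger;
        private FileHandler handler;
        private LocalDate currentDay;

        public DailyFileHandler(Logger logger) {
            this.logger = logger;
        }

        // Call before logging (or from a scheduled task); starts a fresh log each day.
        public synchronized void rollIfNeeded() throws IOException {
            LocalDate today = LocalDate.now();
            if (!today.equals(currentDay)) {
                if (handler != null) {
                    logger.removeHandler(handler);
                    handler.close();
                }
                handler = new FileHandler("importer-" + today + ".log", true); // append
                handler.setFormatter(new SimpleFormatter());
                logger.addHandler(handler);
                currentDay = today;
            }
        }
    }
    ```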

    Read the article

  • Send file by webservice

    - by phenevo
    Hi, I have a webservice with this method: [WebMethod] public byte[] GetFile(string FName) { System.IO.FileStream fs1 = null; fs1 = System.IO.File.Open(FName, FileMode.Open, FileAccess.Read); byte[] b1 = new byte[fs1.Length]; fs1.Read(b1, 0, (int)fs1.Length); fs1.Close(); return b1; } It works with small files (around 1 MB), but with a Photoshop file (about 1.5 GB) I get System.OutOfMemoryException. The idea is that I have a WinForms application which fetches this file and saves it to the local disk.
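
    A 1.5 GB byte[] cannot realistically be built in memory and returned in a single SOAP response. One hedged workaround, sketched with illustrative names, is to expose a chunked method and have the WinForms client call it in a loop, appending each chunk to a local FileStream until an empty array signals end of file:

    ```csharp
    using System;
    using System.IO;
    using System.Web.Services;

    public class FileService : WebService
    {
        // Returns up to 'count' bytes of the file starting at 'offset';
        // an empty array signals end of file.
        [WebMethod]
        public byte[] GetFileChunk(string fileName, long offset, int count)
        {
            using (FileStream fs = File.Open(fileName, FileMode.Open, FileAccess.Read))
            {
                if (offset >= fs.Length)
                    return new byte[0];

                fs.Seek(offset, SeekOrigin.Begin);
                byte[] buffer = new byte[count];
                int read = fs.Read(buffer, 0, count);
                Array.Resize(ref buffer, read);
                return buffer;
            }
        }
    }
    ```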

    Read the article

  • How to slim Windows Vista after tons of auto updates?

    - by royee
    Hi all, I have a 60 GB system drive. But after more than a year of use, the Windows folder itself has grown to 20 GB from all the automatic updates. I know that Windows Installer automatically backs up the patched files for future recovery, but I don't need them. How can I slim down my system? Thanks in advance!

    Read the article

  • Compressing large text data before storing into db?

    - by Steel Plume
    Hello, I have an application which retrieves many large log files from systems on the LAN. Currently I put all log files into PostgreSQL; the table has a column of type TEXT, and I don't plan any searches on this column because a separate external process retrieves all files nightly and scans them for sensitive patterns. So the column could also be a BLOB or a CLOB, but my question is this: the database already has its own compression system, but could I improve on that compression manually with common compressor utilities? And above all, if I manually pre-compress the large file and store it as binary in the table, is that pointless because the database already provides internal compression?
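
    Pre-compressing is worth measuring: once the data is already compressed, the database's own compression gains essentially nothing more, so the real trade-off is spending the CPU in the application rather than in the server. A minimal sketch (Java is used purely for illustration; the table and column names are hypothetical, and the column would be a binary type such as bytea rather than TEXT):

    ```java
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.zip.GZIPOutputStream;

    public class CompressedLogStore {
        // Gzip a log file and insert it into a binary column.
        public static void storeCompressed(Connection conn, String path)
                throws IOException, SQLException {
            byte[] raw = Files.readAllBytes(Paths.get(path));

            ByteArrayOutputStream compressed = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(compressed)) {
                gz.write(raw);
            }

            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO log_files (name, body) VALUES (?, ?)")) {
                ps.setString(1, path);
                ps.setBytes(2, compressed.toByteArray());
                ps.executeUpdate();
            }
        }
    }
    ```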

    Read the article

  • File Upload in GWT in a Special Case

    - by Maksud
    I am building software for a document system. In this system, when a user completes a document and wants to save it, the document is uploaded directly to the server without any user action. The system uses COM/ActiveX so that users can work in their native editors. My problem: suppose I have a file, say d:/notepad.txt. Using the classical method a user can browse for the file and upload it; I can do that with Apache Commons IO and GWT FormPanel and FileUpload. But if I already know the filename (d:/notepad.txt), is there any way to upload the file directly to the server without the user having to browse for it? I am currently doing this by having the ActiveX component call some HttpUpload methods with POST, but that does not maintain the session. Thanks

    Read the article

  • Intelligent serial port mocks with Moq

    - by Padu Merloti
    I have to write a lot of code that deals with serial ports. Usually there is a device connected at the other end of the wire, and I usually create my own mocks to simulate its behavior. I'm starting to look at Moq to help with my unit tests. It's pretty simple to use when you only need a stub, but I want to know whether it is possible, and if so how, to create a mock for a hardware device that responds differently according to what I want to test. A simple example: one of the devices I interface with receives a command (move to position x), gives back an ACK message, and goes into a "moving" state until it reaches the ordered position. I want to create a test where I send the move command and then keep querying the state until it reaches the final position. I want two versions of the mock for two different tests: one where the device reaches the final position successfully, and one where it fails. Too much to ask?
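
    This is doable with plain Moq by scripting a different sequence of replies per test. A hedged sketch follows: the IPositioner interface and its members are invented for illustration, and SetupSequence assumes a Moq version that supports it (with older versions, a counter inside a Returns callback achieves the same effect).

    ```csharp
    using Moq;
    using Xunit;

    public interface IPositioner
    {
        string MoveTo(int position);   // e.g. "ACK" or "NAK"
        string QueryState();           // e.g. "moving", "in position", "error"
    }

    public class PositionerTests
    {
        [Fact]
        public void Move_EventuallyReachesPosition()
        {
            var device = new Mock<IPositioner>();
            device.Setup(d => d.MoveTo(100)).Returns("ACK");
            device.SetupSequence(d => d.QueryState())
                  .Returns("moving")
                  .Returns("moving")
                  .Returns("in position");

            Assert.Equal("ACK", device.Object.MoveTo(100));
            Assert.Equal("in position", PollUntilStopped(device.Object));
        }

        [Fact]
        public void Move_FailsWhenDeviceReportsError()
        {
            var device = new Mock<IPositioner>();
            device.Setup(d => d.MoveTo(100)).Returns("ACK");
            device.SetupSequence(d => d.QueryState())
                  .Returns("moving")
                  .Returns("error");

            Assert.Equal("error", PollUntilStopped(device.Object));
        }

        // Stand-in for the code under test: poll until the device stops moving.
        private static string PollUntilStopped(IPositioner device)
        {
            string state;
            do { state = device.QueryState(); } while (state == "moving");
            return state;
        }
    }
    ```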

    Read the article

  • mdadm: Win7-install created a boot partition on one of my RAID6 drives. How to rebuild?

    - by EXIT_FAILURE
    My problem happened when I attempted to install Windows 7 on it's own SSD. The Linux OS I used which has knowledge of the software RAID system is on a SSD that I disconnected prior to the install. This was so that windows (or I) wouldn't inadvertently mess it up. However, and in retrospect, foolishly, I left the RAID disks connected, thinking that windows wouldn't be so ridiculous as to mess with a HDD that it sees as just unallocated space. Boy was I wrong! After copying over the installation files to the SSD (as expected and desired), it also created an ntfs partition on one of the RAID disks. Both unexpected and totally undesired! . I changed out the SSDs again, and booted up in linux. mdadm didn't seem to have any problem assembling the array as before, but if I tried to mount the array, I got the error message: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so dmesg: EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! EXT4-fs (md0): group descriptors corrupted! I then used qparted to delete the newly created ntfs partition on /dev/sdd so that it matched the other three /dev/sd{b,c,e}, and requested a resync of my array with echo repair > /sys/block/md0/md/sync_action This took around 4 hours, and upon completion, dmesg reports: md: md0: requested-resync done. A bit brief after a 4-hour task, though I'm unsure as to where other log files exist (I also seem to have messed up my sendmail configuration). In any case: No change reported according to mdadm, everything checks out. mdadm -D /dev/md0 still reports: Version : 1.2 Creation Time : Wed May 23 22:18:45 2012 Raid Level : raid6 Array Size : 3907026848 (3726.03 GiB 4000.80 GB) Used Dev Size : 1953513424 (1863.02 GiB 2000.40 GB) Raid Devices : 4 Total Devices : 4 Persistence : Superblock is persistent Update Time : Mon May 26 12:41:58 2014 State : clean Active Devices : 4 Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 4K Name : okamilinkun:0 UUID : 0c97ebf3:098864d8:126f44e3:e4337102 Events : 423 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 8 32 1 active sync /dev/sdc 2 8 48 2 active sync /dev/sdd 3 8 64 3 active sync /dev/sde Trying to mount it still reports: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so and dmesg: EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! EXT4-fs (md0): group descriptors corrupted! I'm a bit unsure where to proceed from here, and trying stuff "to see if it works" is a bit too risky for me. This is what I suggest I should attempt to do: Tell mdadm that /dev/sdd (the one that windows wrote into) isn't reliable anymore, pretend it is newly re-introduced to the array, and reconstruct its content based on the other three drives. I also could be totally wrong in my assumptions, that the creation of the ntfs partition on /dev/sdd and subsequent deletion has changed something that cannot be fixed this way. My question: Help, what should I do? If I should do what I suggested , how do I do that? 
From reading documentation, etc, I would think maybe: mdadm --manage /dev/md0 --set-faulty /dev/sdd mdadm --manage /dev/md0 --remove /dev/sdd mdadm --manage /dev/md0 --re-add /dev/sdd However, the documentation examples suggest /dev/sdd1, which seems strange to me, as there is no partition there as far as linux is concerned, just unallocated space. Maybe these commands won't work without. Maybe it makes sense to mirror the partition table of one of the other raid devices that weren't touched, before --re-add. Something like: sfdisk -d /dev/sdb | sfdisk /dev/sdd Bonus question: Why would the Windows 7 installation do something so st...potentially dangerous? Update I went ahead and marked /dev/sdd as faulty, and removed it (not physically) from the array: # mdadm --manage /dev/md0 --set-faulty /dev/sdd # mdadm --manage /dev/md0 --remove /dev/sdd However, attempting to --re-add was disallowed: # mdadm --manage /dev/md0 --re-add /dev/sdd mdadm: --re-add for /dev/sdd to /dev/md0 is not possible --add, was fine. # mdadm --manage /dev/md0 --add /dev/sdd mdadm -D /dev/md0 now reports the state as clean, degraded, recovering, and /dev/sdd as spare rebuilding. /proc/mdstat shows the recovery progress: md0 : active raid6 sdd[4] sdc[1] sde[3] sdb[0] 3907026848 blocks super 1.2 level 6, 4k chunk, algorithm 2 [4/3] [UU_U] [>....................] recovery = 2.1% (42887780/1953513424) finish=348.7min speed=91297K/sec nmon also shows expected output: ¦sdb 0% 87.3 0.0| > |¦ ¦sdc 71% 109.1 0.0|RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR > |¦ ¦sdd 40% 0.0 87.3|WWWWWWWWWWWWWWWWWWWW > |¦ ¦sde 0% 87.3 0.0|> || It looks good so far. Crossing my fingers for another five+ hours :) Update 2 The recovery of /dev/sdd finished, with dmesg output: [44972.599552] md: md0: recovery done. [44972.682811] RAID conf printout: [44972.682815] --- level:6 rd:4 wd:4 [44972.682817] disk 0, o:1, dev:sdb [44972.682819] disk 1, o:1, dev:sdc [44972.682820] disk 2, o:1, dev:sdd [44972.682821] disk 3, o:1, dev:sde Attempting mount /dev/md0 reports: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so And on dmesg: [44984.159908] EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! [44984.159912] EXT4-fs (md0): group descriptors corrupted! I'm not sure what do do now. Suggestions? 
Output of dumpe2fs /dev/md0: dumpe2fs 1.42.8 (20-Jun-2013) Filesystem volume name: Atlas Last mounted on: /mnt/atlas Filesystem UUID: e7bfb6a4-c907-4aa0-9b55-9528817bfd70 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: user_xattr acl Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 244195328 Block count: 976756712 Reserved block count: 48837835 Free blocks: 92000180 Free inodes: 243414877 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 791 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 RAID stripe width: 2 Flex block group size: 16 Filesystem created: Thu May 24 07:22:41 2012 Last mount time: Sun May 25 23:44:38 2014 Last write time: Sun May 25 23:46:42 2014 Mount count: 341 Maximum mount count: -1 Last checked: Thu May 24 07:22:41 2012 Check interval: 0 (<none>) Lifetime writes: 4357 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: e177a374-0b90-4eaa-b78f-d734aae13051 Journal backup: inode blocks dumpe2fs: Corrupt extent header while reading journal super block

    Read the article

  • Can a 32-bit RHEL4 userland work with a 64-bit kernel?

    - by James
    Is there a way to change an i386 RHEL4 machine to run an amd64 kernel, but ensure that it still builds software into the same i386 binaries? On Debian this seems quite straightforward: just install an amd64 kernel (worst case, build one like this guy: http://www.debian-administration.org/users/jonesy/weblog/1) and prefix everything with "linux32". Then everything that considers uname -m is unchanged; I just need to handle the few cases that consider uname -r. What is the Red Hat equivalent? Is the only way a full 64-bit installation on another disk and then chrooting back to the 32-bit system before anyone builds anything? (Even the best examples of that seem to be Debian-based.) Background: we make a large system that runs on (a variant of) i386 RHEL4. However, some of the larger RHEL build machines now have enough RAM that they might benefit from going 64-bit (for the kernel and maybe some of the bigger build steps). Our build system doesn't support cross-compilation.

    Read the article

  • What Are Basic Tools For A New Project?

    - by Morgan Cheng
    For a long time, I thought that to start a new project we only need 3 basic tools. 1) A build system (e.g. Maven & CruiseControl) 2) A version control system (e.g. CVS, SVN or Git) 3) A bug tracking system (e.g. Bugzilla) Yesterday, a senior colleague told me that we need at least one thing more: KPIs (Key Performance Indicators). Without KPIs, it is impossible to measure whether the project is progressing well. A KPI is a kind of soft tool compared to Maven/SVN/Bugzilla. Since I missed soft tools, I believe there must be other kinds of tools I have missed as well. So, does anybody have ideas about what other basic tools are necessary for a new project?

    Read the article

  • What causes my borderless C++/CLI app to crash when overriding WndProc?

    - by Ste
    I use a form with FormBorderStyle set to None. I need to override WndProc so the form can be moved and resized. However, with this code my app crashes! static const int WM_NCHITTEST = 0x0084; static const int HTCLIENT = 1; static const int HTCAPTION = 2; protected: virtual void Form1::WndProc(System::Windows::Forms::Message %m) override { switch (m.Msg) { case WM_NCHITTEST: if (m.Result == IntPtr(HTCLIENT)) { m.Result = IntPtr(HTCAPTION); } break; } Form1::WndProc(m); } virtual System::Windows::Forms::CreateParams^ get() override { System::Windows::Forms::CreateParams^ cp = __super::CreateParams; cp->Style |= 0x40000; return cp; } How can I fix my code so it doesn't crash but still lets the form be moved and resized?
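
    One likely culprit, offered as a guess from the snippet alone: the final Form1::WndProc(m) calls the override itself rather than the base class, so every message recurses until the stack overflows. Also, m.Result is only meaningful after the base class has handled WM_NCHITTEST. A hedged sketch of the usual borderless-drag pattern (the CreateParams override from the question is left out):

    ```cpp
    // C++/CLI, compiled with /clr and a reference to System.Windows.Forms.dll.
    public ref class DraggableForm : public System::Windows::Forms::Form
    {
        literal int WM_NCHITTEST = 0x0084;
        literal int HTCLIENT = 1;
        literal int HTCAPTION = 2;

    public:
        DraggableForm()
        {
            FormBorderStyle = System::Windows::Forms::FormBorderStyle::None;
        }

    protected:
        virtual void WndProc(System::Windows::Forms::Message% m) override
        {
            // Let the base class process the message first.
            System::Windows::Forms::Form::WndProc(m);

            // Then rewrite the hit-test result so dragging the client area moves the form.
            if (m.Msg == WM_NCHITTEST && m.Result == System::IntPtr(HTCLIENT))
            {
                m.Result = System::IntPtr(HTCAPTION);
            }
        }
    };
    ```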

    Read the article

  • Hang while starting several daemons

    - by Adrian Lang
    I'm running a Debian Squeeze AMD64 server. The target runlevel after boot is runlevel 2, which includes rsyslogd, cron, sshd and some other stuff, but not dovecot, postfix, apache2, etc. The system fails to reach runlevel 2, with several symptoms: The system hangs while trying to start rsyslogd. Booting into runlevel 1 works, and login from the console works. Starting rsyslogd from runlevel 1 via /etc/init.d/rsyslog hangs. Starting runlevel 2 with rsyslogd disabled works, but then logging in via the console fails: I get the motd, and then nothing. Starting sshd from runlevel 1 succeeds, but then I cannot log in via ssh. Sometimes password ssh login gives me the motd and then nothing, sometimes not even this. Trying to offer a public key seems to annoy sshd enough that it won't talk to me any further. When rebooting from runlevel 1, the server hangs while trying to stop apache2 (which is not running, so this really should be trivial). Trying to stop apache2 when logged in in runlevel 1 hangs as well. And that's just the stuff which fails all the time. RAM has been tested, dmesg shows no problems. I have no clue. Update: (shortened) output from rsyslogd -c4 -d called in runlevel 1: rsyslogd 4.6.4 startup, compatibility mode 4, module path '' caller requested object 'net', not found (iRet -3003) Requested to load module 'lmnet' loading module '/user/lib/rsyslog/lmnet.so' module of type 2 being loaded conf.c requested ref for 'lmnet', refcount 1 rsylog runtime initialized, version 4.6.4, current users 1 syslogd.c requested ref for 'lmnet', refcount now 2 I can kill rsyslogd with Ctrl+C, then. /var/log shows none of the configured log files, though. Update 2: Thanks to @DerfK I still have no clue, but at least I narrowed down the problem. I'm now testing with /etc/init.d/apache2 stop (without an apache2 running, of course), which hangs as well and looks like an even more obvious failure. After some testing I found out that a script containing the single line /usr/sbin/apache2ctl configtest > /dev/null 2>&1 hangs, while the same line executed in an interactive shell works. I was not able to reduce this line any further; every single part, i.e. the stream redirections and the command itself, is necessary to reproduce the hang. @DerfK also pointed me to strace, which gave a shallow hint about what kind of hang we have here: wait4(-1 for the init scripts, futex(0xsomepointer, FUTEX_WAIT_PRIVATE, 2, NULL for the rsyslogd / apache2 binaries called by the init scripts. The system was installed as Debian Lenny by my hoster in autumn 2011; I upgraded it to Squeeze immediately and kept it up to date with Squeeze, which at the time was testing. There were no big changes, though. I guess I never tried to reboot the system before.

    Read the article

  • Why won't this SQL CAST work?

    - by Kev
    I have an nvarchar(50) column in a SQL Server 2000 table defined as follows: TaskID nvarchar(50) NULL I need to fill this column with some random SQL unique identifiers (I am unable to change the column type to uniqueidentifier). I tried this: UPDATE TaskData SET TaskID = CAST(NEWID() AS nvarchar) but I got the following error: Msg 8115, Level 16, State 2, Line 1 Arithmetic overflow error converting expression to data type nvarchar. I also tried: UPDATE TaskData SET TaskID = CAST(NEWID() AS nvarchar(50)) but then got this error: Msg 8152, Level 16, State 6, Line 1 String or binary data would be truncated. I don't understand why this doesn't work when this does: DECLARE @TaskID nvarchar(50) SET @TaskID = CAST(NEWID() AS nvarchar(50)) I also tried CONVERT(nvarchar, NEWID()) and CONVERT(nvarchar(50), NEWID()) but got the same errors.

    Read the article

  • SecurityException from Activator.CreateInstance(): how to grant permissions to the assembly?

    - by user365164
    I have been loading an assembly via Assembly.LoadFrom(@"path"); and then doing Type t = asm.GetType("Test.Test"); test = Activator.CreateInstance(t, new Object[] { ... }); and it was working fine, but now that I have moved the dll I am getting the following: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. --- System.Security.SecurityException: Request for the permission of type 'System.Security.Permissons.SecurityPermission, etc .. To keep it brief, it seems the demand was for a PermissionSet that allows ControlAppDomain, and it is not getting it. My question is: how can I create this permission set and grant it to the instance or assembly? I've been googling for hours to no avail.
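
    A common cause (hedged, since the question doesn't say where the dll was moved to) is that the new location is a network share: under .NET 2.0/3.5 code access security, LoadFrom then grants the assembly only intranet-zone permissions, which do not include ControlAppDomain. One frequently used workaround is sketched below, with the path purely illustrative: read the bytes yourself and load them, so the code runs with the caller's grant set (on .NET 4 and later, the <loadFromRemoteSources enabled="true"/> runtime config switch is the usual answer instead).

    ```csharp
    using System;
    using System.IO;
    using System.Reflection;

    class Loader
    {
        static object CreateTest(string dllPath, object[] ctorArgs)
        {
            // Loading from a byte array avoids the location-based evidence that
            // LoadFrom attaches to assemblies sitting on a network share.
            byte[] raw = File.ReadAllBytes(dllPath);
            Assembly asm = Assembly.Load(raw);

            Type t = asm.GetType("Test.Test");
            return Activator.CreateInstance(t, ctorArgs);
        }
    }
    ```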

    Read the article

  • Custom Expression in Linq-to-Sql Designer

    - by csharpnoob
    According to Microsoft: http://msdn.microsoft.com/de-de/library/system.data.linq.mapping.columnattribute.expression.aspx it is possible to add an Expression to the Linq-to-SQL mapping. But how do you configure or add it in the Visual Studio designer? The problem: when I add it manually to the XYZ.designer.cs, it is lost whenever the file is regenerated. //------------------------------------------------------------------------------ // <auto-generated> // This code was generated by a tool. // Runtime Version:2.0.50727.4927 // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ This is what is generated: [Column(Name="id", Storage="_id", DbType="Int")] public System.Nullable<int> id { ... But I need something like this: [Column(Name="id", Storage="_id", DbType="Int", Expression="Max(id)")] public System.Nullable<int> id { ... Thanks.

    Read the article
