Search Results


  • Grouped table with image in first row (and section), the remaining ones having text with different lengths

    - by Structurer
    Hi, I have a grouped table with two sections where I display text of different lengths and therefore different cell heights. I have solved the length/height problem with constrainedToSize: in cellForRowAtIndexPath:. Now I want to add a section with one single row in which I want to show a picture. How should I best implement that? (I'm still a bit of a beginner with Objective-C.) I would guess that I would have to create a UIImageView and somehow link it to the cell, but my biggest concern is how to handle dequeueReusableCellWithIdentifier: now that I will have two different types of cells. I'd appreciate any advice that will help me save time on trial and error!
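    A common approach here – a hedged sketch, not from the original question – is to give each cell type its own reuse identifier, so the two kinds of cells are recycled in separate pools. The layout details and the textForIndexPath: helper are illustrative:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            if (indexPath.section == 0) { // the single-row image section
                static NSString *imageCellId = @"ImageCell";
                UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:imageCellId];
                if (cell == nil) {
                    cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                                   reuseIdentifier:imageCellId] autorelease];
                    UIImageView *imageView = [[[UIImageView alloc] initWithFrame:cell.contentView.bounds] autorelease];
                    imageView.tag = 100; // lets us find the image view again in a reused cell
                    [cell.contentView addSubview:imageView];
                }
                UIImageView *imageView = (UIImageView *)[cell.contentView viewWithTag:100];
                imageView.image = [UIImage imageNamed:@"picture.png"]; // illustrative image name
                return cell;
            }
            // the variable-height text sections
            static NSString *textCellId = @"TextCell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:textCellId];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                               reuseIdentifier:textCellId] autorelease];
                cell.textLabel.numberOfLines = 0; // allow multi-line text
            }
            cell.textLabel.text = [self textForIndexPath:indexPath]; // hypothetical data-source helper
            return cell;
        }

    tableView:heightForRowAtIndexPath: would then return the fixed image height for the picture section and the constrainedToSize:-based height for the text sections.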

    Read the article

  • Any Way to Use a Join in a Lambda Where() on a Table<>?

    - by lush
    I'm in my first couple of days using LINQ in C#, and I'm curious to know whether there is a more concise way of writing the following.

        MyEntities db = new MyEntities(ConnString);
        var q = from a in db.TableA
                join b in db.TableB on a.SomeFieldID equals b.SomeFieldID
                where (a.UserID == CurrentUser && b.MyField == Convert.ToInt32(MyDropDownList.SelectedValue))
                select new { a, b };
        if (q.Any()) { //snip }

    I know that if I wanted to check for the existence of a value in a field of a single table, I could just use the following:

        if (db.TableA.Where(u => u.UserID == CurrentUser).Any()) { //snip }

    But I'm curious to know whether there is a way to use the lambda technique so that it satisfies the first technique's conditions across those two tables. Sorry for any mistakes or lack of clarity; I'll edit as necessary. Thanks in advance.
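    For comparison, a hedged sketch of the same check in method (lambda) syntax – Queryable.Join takes the two key selectors plus a result selector, and parsing the dropdown value once outside the query keeps the predicate simple:

        // Parse once so the comparison value is a plain int inside the query.
        int selected = Convert.ToInt32(MyDropDownList.SelectedValue);

        bool exists = db.TableA
            .Join(db.TableB,
                  a => a.SomeFieldID,       // outer key
                  b => b.SomeFieldID,       // inner key
                  (a, b) => new { a, b })   // result selector
            .Any(x => x.a.UserID == CurrentUser && x.b.MyField == selected);

        if (exists) { //snip }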

    Read the article

  • MySQL query to find the most popular value in a column joined by another value in a second table

    - by Budove
    I have two tables:

        users: user_id, user_zip
        settings: user_id, pref_ex_loc

    I need to find the single most popular pref_ex_loc from the settings table based on a particular user_zip, which will be specified as the variable $userzip. Here is the query that I have now, and obviously it doesn't work.

        $popularexloc = "SELECT pref_ex_loc, user_id COUNT(pref_ex_loc) AS countloc
            FROM settings FULL OUTER JOIN users ON settings.user_id = users.user_id
            WHERE users.user_zip='$userzip'
            GROUP BY settings.pref_ex_loc
            ORDER BY countloc LIMIT 1";
        $popexloc = mysql_query($popularexloc) or die('SQL Error :: '.mysql_error());
        $exlocrow = mysql_fetch_array($popexloc);
        $mostpopexloc=$exlocrow[0];
        echo '<option value="'.$mostpopexloc.'">'.$mostpopexloc.'</option>';

    What am I doing wrong here? I'm not getting any kind of error from this either.
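    A hedged sketch of a corrected query: the stray user_id before COUNT() is a syntax error, MySQL does not support FULL OUTER JOIN (an INNER JOIN is all that is needed here anyway), and ORDER BY needs DESC so the most popular location sorts first:

        SELECT settings.pref_ex_loc, COUNT(settings.pref_ex_loc) AS countloc
        FROM settings
        INNER JOIN users ON settings.user_id = users.user_id
        WHERE users.user_zip = '$userzip'
        GROUP BY settings.pref_ex_loc
        ORDER BY countloc DESC
        LIMIT 1

    Incidentally, MySQL should reject the original query outright, so if the or die() branch never fires, it may be worth confirming this code path actually executes.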

    Read the article

  • Shell command slow when using pipe, fast with intermediate file

    - by plang
    Does anyone understand this huge difference in processing time, when using an intermediate file, or when using a pipe? I'm converting tiff to pdf using standard tools on a fresh debian squeeze server. A standard way of doing this is to convert to ps first. Without pipe: root@web5:~# time tiff2ps test.tif > test.ps real 0m0.860s user 0m0.744s sys 0m0.112s root@web5:~# time ps2pdf13 -sPAPERSIZE=a4 test.ps > test.pdf real 0m0.667s user 0m0.612s sys 0m0.060s With pipe: root@web5:~# time tiff2ps test.tif | ps2pdf13 -sPAPERSIZE=a4 - > test.pdf real 1m6.098s user 0m15.861s sys 0m50.9 During the last command, gs process is at 100% all the time. Update: Here is an strace output for the ps generation: root@web5:~# strace tiff2ps test.tif > test.ps execve("/usr/bin/tiff2ps", ["tiff2ps", "test.tif"], [/* 28 vars */]) = 0 brk(0) = 0x1395000 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb5a1937000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=21735, ...}) = 0 mmap(NULL, 21735, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb5a1931000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/usr/lib/libtiff.so.4", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\200\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=405128, ...}) = 0 mmap(NULL, 2501416, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb5a14b9000 mprotect(0x7fb5a151a000, 2093056, PROT_NONE) = 0 mmap(0x7fb5a1719000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x60000) = 0x7fb5a1719000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/usr/lib/libjpeg.so.62", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\3408\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=145048, ...}) = 0 mmap(NULL, 2240080, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb5a1296000 mprotect(0x7fb5a12b9000, 2093056, PROT_NONE) = 0 mmap(0x7fb5a14b8000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x22000) = 0x7fb5a14b8000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/usr/lib/libz.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260\"\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=93936, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb5a1930000 mmap(NULL, 2188976, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb5a107f000 mprotect(0x7fb5a1096000, 2093056, PROT_NONE) = 0 mmap(0x7fb5a1295000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x16000) = 0x7fb5a1295000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/libm.so.6", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\360>\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=530736, ...}) = 0 mmap(NULL, 2625768, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb5a0dfd000 mprotect(0x7fb5a0e7d000, 2097152, PROT_NONE) = 0 mmap(0x7fb5a107d000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x80000) = 0x7fb5a107d000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) 
open("/lib/libc.so.6", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\240\355\1\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=1437064, ...}) = 0 mmap(NULL, 3545160, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7fb5a0a9b000 mprotect(0x7fb5a0bf4000, 2093056, PROT_NONE) = 0 mmap(0x7fb5a0df3000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x158000) = 0x7fb5a0df3000 mmap(0x7fb5a0df8000, 18504, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7fb5a0df8000 close(3) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb5a192f000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb5a192e000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb5a192d000 arch_prctl(ARCH_SET_FS, 0x7fb5a192e700) = 0 mprotect(0x7fb5a0df3000, 16384, PROT_READ) = 0 mprotect(0x7fb5a107d000, 4096, PROT_READ) = 0 mprotect(0x7fb5a1939000, 4096, PROT_READ) = 0 munmap(0x7fb5a1931000, 21735) = 0 open("test.tif", O_RDONLY) = 3 brk(0) = 0x1395000 brk(0x13b6000) = 0x13b6000 read(3, "II*\0\10\0\0\0", 8) = 8 fstat(3, {st_mode=S_IFREG|0644, st_size=1825656, ...}) = 0 mmap(NULL, 1825656, PROT_READ, MAP_SHARED, 3, 0) = 0x7fb5a176f000 open("/proc/meminfo", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb5a1936000 read(4, "MemTotal: 2090844 kB\nMemF"..., 1024) = 1024 close(4) = 0 munmap(0x7fb5a1936000, 4096) = 0 write(2, "TIFFReadDirectory: ", 19TIFFReadDirectory: ) = 19 write(2, "Warning, ", 9Warning, ) = 9 write(2, "test.tif: wrong data type 7 for "..., 59test.tif: wrong data type 7 for "RichTIFFIPTC"; tag ignored) = 59 write(2, ".\n", 2. ) = 2 gettimeofday({1334836895, 374666}, NULL) = 0 fstat(1, {st_mode=S_IFREG|0664, st_size=0, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb5a1936000 open("/etc/localtime", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=1892, ...}) = 0 fstat(4, {st_mode=S_IFREG|0644, st_size=1892, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fb5a1935000 read(4, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\4\0\0\0\4\0\0\0\0"..., 4096) = 1892 lseek(4, -1217, SEEK_CUR) = 675 read(4, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\6\0\0\0\6\0\0\0\0"..., 4096) = 1217 close(4) = 0 munmap(0x7fb5a1935000, 4096) = 0 write(1, "%!PS-Adobe-3.0 EPSF-3.0\n%%Creato"..., 4096) = 4096 write(1, "fffffffffffffffffffffffffffff\nff"..., 4096) = 4096 write(1, "ffffffffffffffffffff\nfffffffffff"..., 4096) = 4096 write(1, "fffffffffff\nffffffffffffffffffff"..., 4096) = 4096 write(1, "ff\nfffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffff\nfffffff"..., 4096) = 4096 Here is an strace output for the piped version: PS generation seems to be much slower when output is piped into ps2pdf13. 
root@web5:~# strace tiff2ps test.tif | ps2pdf13 -sPAPERSIZE=a4 - > test.pdf execve("/usr/bin/tiff2ps", ["tiff2ps", "test.tif"], [/* 28 vars */]) = 0 brk(0) = 0x1b97000 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9208bb1000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) open("/etc/ld.so.cache", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=21735, ...}) = 0 mmap(NULL, 21735, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f9208bab000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/usr/lib/libtiff.so.4", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\200\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=405128, ...}) = 0 mmap(NULL, 2501416, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f9208733000 mprotect(0x7f9208794000, 2093056, PROT_NONE) = 0 mmap(0x7f9208993000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x60000) = 0x7f9208993000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/usr/lib/libjpeg.so.62", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\3408\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=145048, ...}) = 0 mmap(NULL, 2240080, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f9208510000 mprotect(0x7f9208533000, 2093056, PROT_NONE) = 0 mmap(0x7f9208732000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x22000) = 0x7f9208732000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/usr/lib/libz.so.1", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260\"\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=93936, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9208baa000 mmap(NULL, 2188976, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f92082f9000 mprotect(0x7f9208310000, 2093056, PROT_NONE) = 0 mmap(0x7f920850f000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x16000) = 0x7f920850f000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/libm.so.6", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\360>\0\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0644, st_size=530736, ...}) = 0 mmap(NULL, 2625768, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f9208077000 mprotect(0x7f92080f7000, 2097152, PROT_NONE) = 0 mmap(0x7f92082f7000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x80000) = 0x7f92082f7000 close(3) = 0 access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory) open("/lib/libc.so.6", O_RDONLY) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\240\355\1\0\0\0\0\0"..., 832) = 832 fstat(3, {st_mode=S_IFREG|0755, st_size=1437064, ...}) = 0 mmap(NULL, 3545160, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f9207d15000 mprotect(0x7f9207e6e000, 2093056, PROT_NONE) = 0 mmap(0x7f920806d000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x158000) = 0x7f920806d000 mmap(0x7f9208072000, 18504, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f9208072000 close(3) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9208ba9000 mmap(NULL, 4096, 
PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9208ba8000 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9208ba7000 arch_prctl(ARCH_SET_FS, 0x7f9208ba8700) = 0 mprotect(0x7f920806d000, 16384, PROT_READ) = 0 mprotect(0x7f92082f7000, 4096, PROT_READ) = 0 mprotect(0x7f9208bb3000, 4096, PROT_READ) = 0 munmap(0x7f9208bab000, 21735) = 0 open("test.tif", O_RDONLY) = 3 brk(0) = 0x1b97000 brk(0x1bb8000) = 0x1bb8000 read(3, "II*\0\10\0\0\0", 8) = 8 fstat(3, {st_mode=S_IFREG|0644, st_size=1825656, ...}) = 0 mmap(NULL, 1825656, PROT_READ, MAP_SHARED, 3, 0) = 0x7f92089e9000 open("/proc/meminfo", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9208bb0000 read(4, "MemTotal: 2090844 kB\nMemF"..., 1024) = 1024 close(4) = 0 munmap(0x7f9208bb0000, 4096) = 0 write(2, "TIFFReadDirectory: ", 19TIFFReadDirectory: ) = 19 write(2, "Warning, ", 9Warning, ) = 9 write(2, "test.tif: wrong data type 7 for "..., 59test.tif: wrong data type 7 for "RichTIFFIPTC"; tag ignored) = 59 write(2, ".\n", 2. ) = 2 gettimeofday({1334836513, 114140}, NULL) = 0 fstat(1, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9208bb0000 open("/etc/localtime", O_RDONLY) = 4 fstat(4, {st_mode=S_IFREG|0644, st_size=1892, ...}) = 0 fstat(4, {st_mode=S_IFREG|0644, st_size=1892, ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9208baf000 read(4, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\4\0\0\0\4\0\0\0\0"..., 4096) = 1892 lseek(4, -1217, SEEK_CUR) = 675 read(4, "TZif2\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\6\0\0\0\6\0\0\0\0"..., 4096) = 1217 close(4) = 0 munmap(0x7f9208baf000, 4096) = 0 write(1, "%!PS-Adobe-3.0 EPSF-3.0\n%%Creato"..., 4096) = 4096 write(1, "fffffffffffffffffffffffffffff\nff"..., 4096) = 4096 write(1, "ffffffffffffffffffff\nfffffffffff"..., 4096) = 4096 write(1, "fffffffffff\nffffffffffffffffffff"..., 4096) = 4096 write(1, "ff\nfffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffffffffffff"..., 4096) = 4096 write(1, "ffffffffffffffffffffffffffffffff"..., 4096) = 4096 ...etc...
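    One detail visible in the two traces: with the intermediate file, tiff2ps sees its output as a regular file (fstat(1, {st_mode=S_IFREG...})), whereas in the pipeline it sees a FIFO (fstat(1, {st_mode=S_IFIFO...})), so tiff2ps can take a different code path when it cannot seek its output. Until the cause is pinned down, a minimal workaround sketch that keeps the fast path by going through a temporary file:

        tmp=$(mktemp) &&
        tiff2ps test.tif > "$tmp" &&
        ps2pdf13 -sPAPERSIZE=a4 "$tmp" test.pdf
        rm -f "$tmp"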

    Read the article

  • SQL SERVER – Windows File/Folder and Share Permissions – Notes from the Field #029

    - by Pinal Dave
    [Note from Pinal]: This is the 29th episode of the Notes from the Field series. Security is a task we should give to the experts. If there is a small oversight or misstep, there are good chances that the security of the organization is compromised. This is very true, but there are always devil's advocates who believe everyone should know security. As a DBA and Administrator, I often see people not taking an interest in Windows security, hiding behind the excuse of not being an expert on Windows Server. We all often miss the important mission statement for the success of any organization – teamwork. In this blog post Brian tells the story in very interesting, lucid language. Read on!

    In this episode of the Notes from the Field series, database expert Brian Kelley explains a very crucial issue DBAs and developers face on their production servers. Linchpin People are database coaches and wellness experts for a data-driven world. Read the experience of Brian in his own words.

    When I talk security among database professionals, I find that most have at least a working knowledge of how to apply security within a database. When I talk with DBAs in particular, I find that most have at least a working knowledge of security at the server level, if we're speaking of SQL Server. One area I see continually that is weak is Windows file/folder (NTFS) and share permissions. The typical response is, "I'm a database developer and the Windows system administrator is responsible for that." That may very well be true – the system administrator may have the primary responsibility and accountability for file/folder and share security for the server. However, if you're involved in the typical activities surrounding databases and moving data around, you should know these permissions, too. Otherwise, you could be setting yourself up where someone is able to get to data he or she shouldn't, or you could be opening the door where human error puts bad data in your production system.

    File/Folder Permission Basics: I wrote about file/folder permissions a few years ago to give the basic permissions that are most often seen. Here's what you must know as a minimum at the file/folder level:

    Read - Allows you to read the contents of the file or folder. Having read permissions allows you to copy the file or folder.
    Write - Again, as the name implies, it allows you to write to the file or folder. This doesn't include the ability to delete; however, nothing stops a person with this access from writing an empty file.
    Delete - Allows the file/folder to be deleted. If you overwrite files, you may need this permission.
    Modify - Allows read, write, and delete.
    Full Control - Same as Modify + the ability to assign permissions.

    File/folder permissions aggregate, unless there is a DENY (which trumps, just like within SQL Server), meaning that if a person is in one group that gives Read and another group that gives Write, that person has both Read and Write permissions. As you might expect me to say, always apply the Principle of Least Privilege. This likely means that any additional permission you might add does not need Full Control.

    Share Permission Basics: At the share level, here are the permissions:

    Read - Allows you to read the contents on the share.
    Change - Allows you to read, write, and delete contents on the share.
    Full Control - Change + the ability to modify permissions.

    Like file/folder permissions, these permissions aggregate, and DENY trumps.

    So What Access Does a Person / Process Have?
    Figuring out what access someone or some process has depends on how the location is being accessed:

    Access comes through the share (\\ServerName\Share) – a combination of permissions is considered.
    Access is through a drive letter (C:\, E:\, S:\, etc.) – only the file/folder permissions are considered.

    The only complicated one here is access through the share. Here's what Windows does:

    1. Figures out what the aggregated permissions are at the file/folder level.
    2. Figures out what the aggregated permissions are at the share level.
    3. Takes the most restrictive of the two sets of permissions.

    You can test this by granting Full Control over a folder (this is likely already in place for the Users local group) and then setting up a share. Give only Read access through the share, and that includes to Administrators (if you're creating a share, you likely have membership in the Administrators group). Try to read a file through the share. Now try to modify it. The most restrictive permission is the share-level permission: it's set to only allow Read, so if you come through the share, that is what applies.

    Does This Knowledge Really Help Me? In my experience, it does. I've seen cases where sensitive files were accessible by every authenticated user through a share. Auditors, as you might expect, have a real problem with that. I've also seen cases where files to be imported as part of the nightly processing were overwritten by files intended for development. And I've seen cases where a process couldn't get to the files it needed because someone changed the permissions. If you know file/folder and share permissions, you can spot and correct these types of security flaws. Given that there are a lot of database professionals who don't understand these permissions, if you know them, you set yourself apart. And if you're able to help on critical processes, you begin to set yourself up as a linchpin (link to .pdf) for your organization.

    If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Query Logging in Analysis Services

    - by MikeD
    On a project I work on, we capture the queries that get executed on our Analysis Services instance (SQL Server 2008 R2) and use the table to help us build aggregations; we also aggregate the query log daily into a data warehouse of operational data so we can track usage of our Analysis databases by users over time. We've learned a couple of helpful things about this logging that I'd like to share here.

    First off, the query log table automatically gets cleaned out by SSAS under a few conditions - schema changes to the analysis database, and even regular data and aggregation processing, can delete rows in the table. We like to keep these logs longer than that, so we have a trigger on the table that copies all rows into another table with the same structure. Here is our trigger code:

        CREATE TRIGGER [dbo].[SaveQueryLog] ON [dbo].[OlapQueryLog] AFTER INSERT AS
            INSERT INTO dbo.[OlapQueryLog_History]
                (MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration)
            SELECT MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration
            FROM inserted

    Second, the query logging process is "best effort" - if SSAS cannot connect to the database listed in the QueryLogConnectionString in the Analysis Server properties, it just stops logging - it doesn't generate any errors to the client at all, which is a good thing. Once it stops logging, it doesn't retry later - an hour, a day, a week, or even a month later - so long as the service doesn't restart.

    That has burned us a couple of times, when we have made changes to the service account that is used for SSAS and that account doesn't have access to the database we want to log to. The last time this happened, we noticed a while later that no logging was taking place, and I determined that the service account didn't have sufficient permissions, so I made the necessary changes to give that service account access to the logging database. I first tried just the db_datawriter role and that wasn't enough, so I granted the service account membership in the db_owner role. Yes, that's a much bigger set of permissions, but I didn't want to search out the specific permissions at the time.

    Once I determined that the service account had the appropriate permissions, I wanted to get query logging restarted from SSAS, and I wondered how to do that. Having just used a larger hammer than necessary with the db_owner role membership, I considered just restarting SSAS to get it logging again. However, this was a production server, it was the middle of business hours, and there were active users connected to that SSAS instance, so I thought better of it.

    As I considered the options, I remembered that the first time I set up query logging, by putting a valid connection string into the QueryLogConnectionString server property, logging started immediately after I saved the properties. I wondered if I could make some other change to the connection string so that the query logging would start again without restarting the service. I went into the connection string dialog, went to the All page, and looked at the properties I could change that wouldn't affect the actual connection. Aha! The Application Name property would do just nicely - I set it to "SSAS Query Logging" (it was previously blank) and saved the changes to the server properties. And the query logging started up right away.
    If I need to get this running again in the future, I could just make a small change to the Application Name property again, save it, and even change it back again if I wanted to. The other nice side effect of setting the Application Name property is that now I can see (and possibly filter for or filter out) the SQL activity in that database that is related to the query logging process in Profiler.

    To sum up:

    The SSAS query logging process will automatically delete rows from the QueryLog table, so if you want to keep them longer, put a trigger on the table to copy the rows to another table.
    The SSAS service account requires more than db_datawriter role membership (and probably less than db_owner) in the database specified in the QueryLogConnectionString server property to successfully insert log rows into the QueryLog table.
    Query logging will stop quietly whenever it encounters an error. Make a change to the QueryLogConnectionString server property (such as the Application Name attribute) to get query logging to restart, and you won't have to restart the service.
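    For reference, a hedged sketch of the permission change described above – the logging database and service account names are illustrative, and the post only establishes that db_datawriter alone was insufficient while db_owner worked:

        USE QueryLogDB;  -- hypothetical logging database
        GO
        -- Map the SSAS service account into the logging database
        CREATE USER [DOMAIN\SSASService] FOR LOGIN [DOMAIN\SSASService];
        GO
        -- db_datawriter alone was not enough in the author's test;
        -- db_owner worked, though it grants more than is strictly needed
        EXEC sp_addrolemember 'db_owner', 'DOMAIN\SSASService';
        GO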

    Read the article

  • SQL SERVER – Merge Operations – Insert, Update, Delete in Single Execution

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by Jorge Segarra (aka SQLChicken). I have been very active using MERGE operations in my development. However, I have found out from my consultancy work and friends that these amazing operations are not utilized by them most of the time. Here is my attempt to bring the necessity of using the MERGE operation to the surface one more time.

    MERGE is a new feature that provides an efficient way to do multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; however, at present, by using the MERGE statement, we can include the logic of such data changes in one statement that even checks when the data is matched and then just updates it, and similarly, when the data is unmatched, inserts it. One of the most important advantages of the MERGE statement is that the entire data set is read and processed only once. In earlier versions, three different statements had to be written to process three different activities (INSERT, UPDATE, or DELETE); however, by using the MERGE statement, all the update activities can be done in one pass of the database table.

    I have written about these MERGE operations earlier in my blog post SQL SERVER – 2008 – Introduction to Merge Statement – One Statement for INSERT, UPDATE, DELETE. I was asked by one of the readers how we know that this operator does everything in a single pass and does not call the Merge operator multiple times. Let us run the same example which I used earlier; I am listing it here again for convenience.

        -- Let's create StudentDetails and StudentTotalMarks and insert some records.
        USE tempdb
        GO
        CREATE TABLE StudentDetails
        (
            StudentID INTEGER PRIMARY KEY,
            StudentName VARCHAR(15)
        )
        GO
        INSERT INTO StudentDetails VALUES(1,'SMITH')
        INSERT INTO StudentDetails VALUES(2,'ALLEN')
        INSERT INTO StudentDetails VALUES(3,'JONES')
        INSERT INTO StudentDetails VALUES(4,'MARTIN')
        INSERT INTO StudentDetails VALUES(5,'JAMES')
        GO
        CREATE TABLE StudentTotalMarks
        (
            StudentID INTEGER REFERENCES StudentDetails,
            StudentMarks INTEGER
        )
        GO
        INSERT INTO StudentTotalMarks VALUES(1,230)
        INSERT INTO StudentTotalMarks VALUES(2,255)
        INSERT INTO StudentTotalMarks VALUES(3,200)
        GO
        -- Select from tables
        SELECT * FROM StudentDetails
        GO
        SELECT * FROM StudentTotalMarks
        GO
        -- Merge statement
        MERGE StudentTotalMarks AS stm
        USING (SELECT StudentID, StudentName FROM StudentDetails) AS sd
        ON stm.StudentID = sd.StudentID
        WHEN MATCHED AND stm.StudentMarks > 250 THEN DELETE
        WHEN MATCHED THEN UPDATE SET stm.StudentMarks = stm.StudentMarks + 25
        WHEN NOT MATCHED THEN INSERT(StudentID, StudentMarks) VALUES(sd.StudentID, 25);
        GO
        -- Select from tables
        SELECT * FROM StudentDetails
        GO
        SELECT * FROM StudentTotalMarks
        GO
        -- Clean up (drop the referencing table first because of the foreign key)
        DROP TABLE StudentTotalMarks
        GO
        DROP TABLE StudentDetails
        GO

    The MERGE performs very well and the following result is obtained. Let us check the execution plan for the merge operator. You can click on the following image to enlarge it. Let us evaluate the execution plan for the Table Merge operator only. We can clearly see that the Number of Executions property shows the value 1, which makes it quite clear that in a single pass the MERGE operation completes the INSERT, UPDATE, and DELETE operations. I strongly suggest you use this operation, if possible, in your development. I have seen this operation implemented in many data warehousing applications.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Joins, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Merge

    Read the article

  • Infragistics UltraWebTab

    - by user354089
    Im using Infragistcs UltraWebTab. The code is shown below ` <div class="tab-content"> <asp:Panel ID="PnlGeneral" runat="server"> <table width="100%" border="0" cellspacing="0" cellpadding="0" class="tab-list"> <tr> <td style="border-bottom-color: White"> <asp:Label ID="LblErrors" runat="server" CssClass="ErrorMessage1"></asp:Label> <asp:Label ID="LblSuccessMsg" runat="server" CssClass="SuccessMessage1"></asp:Label> </td> </tr> <tr> <td> <table cellpadding="0" cellspacing="0" width="100%" border="0" class="tab-list"> <tr> <th width="205" class="FormLabel1"> Campaign Name <span class="ErrorMessage">*</span> </th> <td width="80%"> <asp:TextBox ID="TxtCampaignName" runat="server" CssClass="TextBox1"></asp:TextBox> </td> </tr> <tr> <th width="205" class="FormLabel1"> CRM Name <span class="ErrorMessage">*</span> </th> <td width="80%"> <asp:TextBox ID="TxtCRMName" runat="server" CssClass="TextBox1"></asp:TextBox> </td> </tr> <tr> <th class="FormLabel1"> Campaign Type <span class="ErrorMessage">*</span> </th> <td> <asp:DropDownList ID="DDLCampaignType" runat="server" CssClass="TextBox1" AutoPostBack="true" Width="117px" OnSelectedIndexChanged="DDLCampaignType_SelectedIndexChanged"> </asp:DropDownList> </td> </tr> <tr visible="false" id="trCompanyRow" runat="server"> <th class="FormLabel1"> Company <span class="ErrorMessage">*</span> </th> <td> <table cellpadding="0" cellspacing="0" border="0" width="100%"> <tr> <td class="style2" style="border-bottom-color: White"> <asp:DropDownList ID="DDLCompany" runat="server" CssClass="TextBox1" Width="117px" OnSelectedIndexChanged="DDLCompany_SelectedIndexChanged"> </asp:DropDownList> </td> <td class="style3" style="border-bottom-color: White"> <asp:LinkButton ID="btnlnknewCompany" runat="server" Style="font-size: 100%;" Text="Add New" OnClick="btnlnknewCompany_Click"></asp:LinkButton> </td> <td style="border-bottom-color: White"> <table cellpadding="0" cellspacing="0" border="0" width="100%" id="tdNewComapny" visible="false" runat="server"> <tr> <td class="style4" style="border-bottom-color: White"> <asp:TextBox ID="txtCompanyName" runat="server"></asp:TextBox> </td> <td style="border-bottom-color: White"> <asp:Button ID="btnCompanyAdd" runat="server" CssClass="btn1" Height="20px" Text="Add" Width="25%" OnClick="btnCompanyAdd_Click" /> </td> </tr> </table> </td> </tr> </table> </td> </tr> <tr id="trBannerImage" runat="server" visible="false"> <th class="FormLabel1"> Banner Image <span class="ErrorMessage">*</span> </th> <td> <asp:FileUpload ID="FileUploadBannerImage" runat="server" ToolTip="Add images for banner" /> </td> </tr> <tr> <th class="FormLabel1"> Start Date <span class="ErrorMessage">*</span> </th> <td> <asp:TextBox ID="TxtStartDate" runat="server" CssClass="TextBox1"></asp:TextBox><rjs:PopCalendar ID="CalStartDate" runat="server" Control="TxtStartDate" Format="dd mm yyyy" ShowErrorMessage="false" /> &nbsp;dd-mm-yyyy </td> </tr> <tr> <th class="FormLabel1"> End Date <span class="ErrorMessage">*</span> </th> <td> <asp:TextBox ID="TxtEndDate" runat="server" CssClass="TextBox1"></asp:TextBox><rjs:PopCalendar ID="CalEndDate" runat="server" Control="TxtEndDate" Format="dd mm yyyy" ShowErrorMessage="false" /> &nbsp;dd-mm-yyyy </td> </tr> <tr> <th class="FormLabel1"> Enabled? 
</th> <td> <asp:CheckBox ID="ChkEnabled" runat="server" /> </td> </tr> <tr style="border-bottom-color: White" id="tblVerificationFields" visible="false" runat="server"> <th style="border-bottom-color: White"> Company's Verification Fields </th> <td style="border-bottom-color: White"> <table border="0" cellspacing="0" cellpadding="0" class="tab-form" width="100%"> <tr> <td colspan="3" align="center"> <br /> <p> <label style="font-size: 14px; font-weight: bold; text-align: center;"> Select from existing verification fields below</label></p> </td> </tr> <tr> <td colspan="2"> <asp:Repeater ID="RptrVeriFieldsParamType" runat="server"> <HeaderTemplate> <table border="0" cellpadding="0" cellspacing="0" class="tab-grid" style="border: 0px"> <tr> <th> </th> <th> Field Name </th> <th> Type </th> <th> </th> </tr> </HeaderTemplate> <ItemTemplate> <tr> <td style="border-bottom-color: White"> <asp:CheckBox ID="RptrChkVeriFields" runat="server" /> </td> <td style="border-bottom-color: White"> <asp:Label ID="RptrFieldName" runat="server" Text='<%# Eval("FieldName") %>'> </asp:Label> </td> <td style="border-bottom-color: White"> <asp:Label ID="RptrParamterTypeName" runat="server" Text='<%# Eval("PARAMETERTYPENAME") %>'> </asp:Label> </td> <td> <asp:Label ID="RptrHdnFieldId" runat="server" Text='<%# Eval("FIELDID") %>' Visible="false"></asp:Label> </td> </tr> </ItemTemplate> <FooterTemplate> </table> </FooterTemplate> </asp:Repeater> </td> </tr> <tr> <td style="border-bottom-color: White"> </td> </tr> <tr> <td> <table border="0" cellpadding="0" cellspacing="0" width="100%" runat="server"> <tr> <td colspan="6" style="border-bottom-color: White" align="center"> <br /> <p> <label style="font-size: 14px; font-weight: bold; text-align: center; width: 100%;"> Or Add New verification field</label> </p> </td> </tr> <tr id="trVerifcationFields" runat="server" visible="false"> <th style="border-bottom-color: White" width="110px"> <strong>Verification Name</strong> </th> <td style="border-bottom-color: White" width="50px"> <asp:TextBox ID="TxtVeriField" runat="server"> </asp:TextBox> </td> <th style="border-bottom-color: White" width="100px"> <strong>Parameter Type</strong> </th> <td style="border-bottom-color: White" width="100px"> <asp:DropDownList ID="DDLParameterType" runat="server"> </asp:DropDownList> </td> <th> </th> <td align="left" style="border-bottom-color: White"> <asp:Button ID="BtnAddVeriField" runat="server" CssClass="btn1" Height="20px" Text="Add" OnClick="BtnAddVeriField_Click" Width="75%" /> </td> </tr> </table> </td> </tr> </table> </td> </tr> </table> </td> </tr> <tr> <td> <table cellpadding="0" cellspacing="0" width="100%" border="0" class="tab-list"> <tr align="right"> <td> </td> <td align="right" class="tab-list"> <asp:Button runat="server" ID="Next" Visible="true" Text="Next >" CssClass="btn" /> </td> </tr> </table> </td> </tr> </table> </asp:Panel> </div> </ContentTemplate> </igtab:Tab> <igtab:Tab Text="CRM Deals (Step-2)" Key="Tab2"> <ContentTemplate> <div style="clear: both"> </div> <div class="tab-content"> <asp:Panel ID="PnlCRMDeals" runat="server" ScrollBars="Vertical" Height="500px"> <table width="100%" border="0" cellspacing="0" cellpadding="0" class="tab-list"> <tr> <td> <asp:GridView ID="GridDeals" AutoGenerateColumns="False" runat="server" BorderStyle="none" BorderWidth="0" CellPadding="0" CellSpacing="0" GridLines="None" ShowFooter="false" HorizontalAlign="Left" CssClass="tab-grid" Width="100%"> <HeaderStyle CssClass="header" HorizontalAlign="Center" /> <PagerStyle CssClass="pager" 
/> <AlternatingRowStyle CssClass="odd" /> <Columns> <asp:TemplateField> <ItemStyle Width="5%" /> <ItemTemplate> <asp:CheckBox ID="ChkDeals" runat="server" Visible="true" /> </ItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Deal Name"> <ItemStyle Width="25%" /> <ItemTemplate> <asp:Label ID="DealName" runat="server" Text='<%# Eval("DealName") %>' /> </ItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Name in CRM"> <ItemTemplate> <asp:Label ID="CRMDealName" runat="server" Text='<%# Eval("CRM_NAME") %>' /> </ItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Deal Code"> <ItemTemplate> <asp:Label ID="PartNum" runat="server" Text='<%# Eval("PARTNUM") %>' /> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> </td> </tr> </table> </asp:Panel> </div> </ContentTemplate> </igtab:Tab> ` The problem im facing is after the "BtnAddVeriField" add button is clicked the Panel for the next tab gets displayed below the first tab's Panel. Furthermore, that Add button is not displayed as well.

    Read the article

  • Hibernate exception

    - by Mark
    Hi all, im new to hibernate! i have followed the netbeans tutorial on creating a hibernate enabled application. after sucessfully creating a database in mysql workbench i reversed engineered the pojos etc and then tried to run a simple query(from Course) and got the following org.hibernate.MappingException: An association from the table coursemodule refers to an unmapped class: DAL.Module at org.hibernate.cfg.Configuration.secondPassCompileForeignKeys(Configuration.java:1252) at org.hibernate.cfg.Configuration.secondPassCompile(Configuration.java:1170) at org.hibernate.cfg.AnnotationConfiguration.secondPassCompile(AnnotationConfiguration.java:324) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1286) at org.hibernate.cfg.AnnotationConfiguration.buildSessionFactory(AnnotationConfiguration.java:859) heres the generated class for Course package DAL; // Generated 02-May-2010 16:41:16 by Hibernate Tools 3.2.1.GA import java.util.HashSet; import java.util.Set; /** * Course generated by hbm2java */ public class Course implements java.io.Serializable { private int id; private String name; private Set<Module> modules = new HashSet<Module>(0); public Course() { } public Course(int id, String name) { this.id = id; this.name = name; } public Course(int id, String name, Set<Module> modules) { this.id = id; this.name = name; this.modules = modules; } public int getId() { return this.id; } public void setId(int id) { this.id = id; } public String getName() { return this.name; } public void setName(String name) { this.name = name; } public Set<Module> getModules() { return this.modules; } public void setModules(Set<Module> modules) { this.modules = modules; } } and its config file course.hbm.xml <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd"> <!-- Generated 02-May-2010 16:41:16 by Hibernate Tools 3.2.1.GA --> <hibernate-mapping> <class name="DAL.Course" table="course" catalog="walkthrough"> <id name="id" type="int"> <column name="id" /> <generator class="assigned" /> </id> <property name="name" type="string"> <column name="name" not-null="true" /> </property> <set name="modules" inverse="false" table="coursemodule"> <key> <column name="courseId" not-null="true" unique="true" /> </key> <many-to-many entity-name="DAL.Module"> <column name="moduleId" not-null="true" unique="true" /> </many-to-many> </set> </class> </hibernate-mapping> hibernate.reveng.xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE hibernate-reverse-engineering PUBLIC "-//Hibernate/Hibernate Reverse Engineering DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-reverse-engineering-3.0.dtd"> <hibernate-reverse-engineering> <schema-selection match-catalog="Walkthrough"/> <table-filter match-name="walkthrough"/> <table-filter match-name="course"/> <table-filter match-name="module"/> <table-filter match-name="studentmodule"/> <table-filter match-name="attendee"/> <table-filter match-name="student"/> <table-filter match-name="coursemodule"/> <table-filter match-name="session"/> <table-filter match-name="test"/> </hibernate-reverse-engineering> hibernate.cfg.xml <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd"> <hibernate-configuration> <session-factory> <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property> <property 
name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property> <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/Walkthrough</property> <property name="hibernate.connection.username">root</property> <property name="hibernate.connection.password">password</property> <property name="hibernate.show_sql">true</property> <property name="hibernate.current_session_context_class">thread</property> <mapping resource="DAL/Student.hbm.xml"/> <mapping resource="DAL/Walkthrough.hbm.xml"/> <mapping resource="DAL/Test.hbm.xml"/> <mapping resource="DAL/Module.hbm.xml"/> <mapping resource="DAL/Session.hbm.xml"/> <mapping resource="DAL/Course.hbm.xml"/> </session-factory> </hibernate-configuration> any ideas on why im getting this exception? ps. test is just a table with an id in it and is not related to anything. running "from Test" works

    Read the article

  • Warehouse Management for Endeca: videos available on YouTube

    - by Claudia Caramelli-Oracle
    The WMS product management team has recorded four videos on the Warehouse Management extensions for Endeca – the application that manages warehouse operations in real time. Almost an hour of content, covering:

    Introduction to the WMS extensions for Endeca
    Plan and Track Fulfillment
    Space Utilization
    Labor Utilization

    All four videos can be found by clicking here.

    Read the article

  • Reducing Deadlocks - not a DBA issue?

    - by steveh99999
    As a DBA, I'm involved on an almost daily basis in troubleshooting 'SQL Server' performance issues. Often, this troubleshooting soon veers away from 'it's a SQL Server issue' to become a wider application/database design/coding issue.

    One common perception with SQL Server is that deadlocking is an application design issue - and is fixed by recoding. I see this reinforced by MCP-type questions/scenarios where the answer to prevent deadlocking is simply to change the order in which tables are accessed in code. Whilst this is correct, I do think it has led to a situation where many 'operational' or 'production support' DBAs, when faced with a deadlock, are happy to throw the issue over to developers without analysing it further. A couple of 'war stories' on deadlocks which I think are interesting:

    Case One: I had an issue recently on a third-party application that I support on SQL 2008. This particular third-party application has an unusual support agreement where the customer is allowed to change the index design on the third-party provided database. However, we are not allowed to alter application code or modify table structure. This third-party application is also known to encounter occasional deadlocks – indeed, I have documentation from the vendor that up to 50 deadlocks per day is not unusual!

    So, as a DBA I have to support an application which in my opinion has too many deadlocks - but I cannot influence the design of the tables or stored procedures of the application. This should be the classic blame-the-third-party-developers scenario, where we hope the issue gets addressed in a future application release - i.e. we could wait years for this to be resolved and implemented in our production environment. But, as DBAs can change the index layout, was there anything I could still do to reduce the deadlocks in the application?

    I initially used SQL trace flag 1222 to write deadlock detection output to the SQL errorlog – using this I was able to identify one table heavily involved in the deadlocks. When I examined the table definition, I was surprised to see it was a heap – i.e. no clustered index existed on the table. Using SQL Profiler to see the locking behaviour and the plan for the query involved in the deadlock, I was able to confirm a table scan was being performed. By creating an appropriate clustered index, it was possible to produce a more efficient plan and locking behaviour. So, fewer locks, held for less time = less possibility of deadlocks. I'm still unhappy about the overall number of deadlocks on this system - but that's something to be discussed further with the vendor.

    Case Two: a system which hadn't changed for months suddenly started seeing deadlocks on a regular basis. I love the 'nothing's changed' scenario, as it gives me the opportunity to appear wise and say 'nothing's changed on this system, except the data'. This particular deadlock occurred on a table which had been growing rapidly. By using DBCC SHOW_STATISTICS, the DBA team were able to see that the deadlocks seemed to be occurring shortly after auto-update stats had regenerated the table statistics using its default sampling behaviour. As a quick fix, we were able to schedule a nightly UPDATE STATISTICS WITH FULLSCAN on the table involved in the deadlock - thus greatly reducing the potential for stats to be updated via auto_update_stats, and consequently reducing the potential for a bad plan to be generated from an unrepresentative sample of the data. This reduced the possibility of a deadlock occurring. Not a perfect solution by any means, but quick, easy to implement, and needing no application code changes. This fix gave us some 'breathing space' to properly fix the code during the next scheduled application release.

    The moral of this post - don't dismiss deadlocks as issues that can only be fixed by developers...
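    A minimal sketch of that Case Two quick fix – the table name and scheduling are illustrative, since the post doesn't give them:

        -- Nightly SQL Agent job step: fully scan the fast-growing table so
        -- auto-update stats rarely kicks in with its small default sample
        UPDATE STATISTICS dbo.FastGrowingTable WITH FULLSCAN;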

    Read the article

  • ZFS Storage Appliance: configuring LDAP

    - by user13138569
    Here is how I connected a ZFS Storage Appliance to OpenLDAP and verified an LDAP user. OpenLDAP runs on Solaris 11; the slapd.conf and LDIF below define a single test user, user01.

    slapd.conf:

        #
        # See slapd.conf(5) for details on configuration options.
        # This file should NOT be world readable.
        #
        include /etc/openldap/schema/core.schema
        include /etc/openldap/schema/cosine.schema
        include /etc/openldap/schema/nis.schema

        # Define global ACLs to disable default read access.

        # Do not enable referrals until AFTER you have a working directory
        # service AND an understanding of referrals.
        #referral ldap://root.openldap.org

        pidfile /var/openldap/run/slapd.pid
        argsfile /var/openldap/run/slapd.args

        # Load dynamic backend modules:
        modulepath /usr/lib/openldap
        moduleload back_bdb.la
        # moduleload back_hdb.la
        # moduleload back_ldap.la

        # Sample security restrictions
        # Require integrity protection (prevent hijacking)
        # Require 112-bit (3DES or better) encryption for updates
        # Require 63-bit encryption for simple bind
        # security ssf=1 update_ssf=112 simple_bind=64

        # Sample access control policy:
        # Root DSE: allow anyone to read it
        # Subschema (sub)entry DSE: allow anyone to read it
        # Other DSEs:
        # Allow self write access
        # Allow authenticated users read access
        # Allow anonymous users to authenticate
        # Directives needed to implement policy:
        # access to dn.base="" by * read
        # access to dn.base="cn=Subschema" by * read
        # access to *
        # by self write
        # by users read
        # by anonymous auth
        #
        # if no access controls are present, the default policy
        # allows anyone and everyone to read anything but restricts
        # updates to rootdn. (e.g., "access to * by * read")
        #
        # rootdn can always read and write EVERYTHING!

        #######################################################################
        # BDB database definitions
        #######################################################################
        database bdb
        suffix "dc=oracle,dc=com"
        rootdn "cn=Manager,dc=oracle,dc=com"
        # Cleartext passwords, especially for the rootdn, should
        # be avoid. See slappasswd(8) and slapd.conf(5) for details.
        # Use of strong authentication encouraged.
        rootpw secret
        # The database directory MUST exist prior to running slapd AND
        # should only be accessible by the slapd and slap tools.
        # Mode 700 recommended.
        directory /var/openldap/openldap-data
        # Indices to maintain
        index objectClass eq

    And the LDIF that was loaded:

        dn: dc=oracle,dc=com
        objectClass: dcObject
        objectClass: organization
        dc: oracle
        o: oracle

        dn: cn=Manager,dc=oracle,dc=com
        objectClass: organizationalRole
        cn: Manager

        dn: ou=People,dc=oracle,dc=com
        objectClass: organizationalUnit
        ou: People

        dn: ou=Group,dc=oracle,dc=com
        objectClass: organizationalUnit
        ou: Group

        dn: uid=user01,ou=People,dc=oracle,dc=com
        uid: user01
        objectClass: top
        objectClass: account
        objectClass: posixAccount
        objectClass: shadowAccount
        cn: user01
        uidNumber: 10001
        gidNumber: 10000
        homeDirectory: /home/user01
        userPassword: secret
        loginShell: /bin/bash
        shadowLastChange: 10000
        shadowMin: 0
        shadowMax: 99999
        shadowWarning: 14
        shadowInactive: 99999
        shadowExpire: -1

    With the directory running, point the ZFS Storage Appliance at it: under Configuration > SERVICES > LDAP, set the Base search DN and the LDAP server, then look up the LDAP user user01 from the appliance. If the lookup fails with "Unknown or invalid user", check from the Solaris 11 side that the LDAP client is enabled and that the user resolves with getent:
        # svcadm enable svc:/network/nis/domain:default
        # svcadm enable ldap/client
        # ldapclient manual -a authenticationMethod=none -a defaultSearchBase=dc=oracle,dc=com -a defaultServerList=192.168.56.201
        System successfully configured
        # getent passwd user01
        user01:x:10001:10000::/home/user01:/bin/bash

    With the user resolving, mount a share from the appliance and confirm that files created by user01 get the right ownership:

        # mount -F nfs -o vers=3 192.168.56.101:/export/user01 /mnt
        # su user01
        bash-4.1$ cd /mnt
        bash-4.1$ touch aaa
        bash-4.1$ ls -l
        total 1
        -rw-r--r-- 1 user01 10000 0 May 31 04:32 aaa

    The LDAP user now works against the appliance end to end!

    Read the article

  • SSAS: Using fake dimension and scopes for dynamic ranges

    - by DigiMortal
    In one of my BI projects I needed to find the count of objects in each income range. The usual solution with a range dimension was useless because the range an object belongs to changes over time. These ranges depend on a calculation done over the incomes measure, so I really had no option to use a classic solution. Thanks to the SSAS forums I got my problem solved, and here is the solution.

    The problem – how to create dynamic ranges?

    I have two dimensions in my SSAS cube: one for invoices related to object rents and the other for objects. There is a measure that sums invoice totals, and two calculations. One of these calculations performs some computations based on object income and some other object attributes. The second calculation uses the first one to define the income range where an object belongs. What I need is a query that returns how many objects there are in each group. I cannot use a dimension for the range because on one date an object may belong to one income range and two days later to another. For example, if an object is not rented out for two days it makes no money and its income stays the same as before. If the object is rented out after two days it makes some income, and this income may move it to another income range.

    Solution – fake dimension and scopes

    Thanks to Gerhard Brueckl from pmOne I got everything working fine after some struggling with BI Studio. The original discussion he pointed out can be found in the SSAS official forums thread Create a banding dimension that groups by a calculated measure. The solution was pretty simple by nature – we have to define a fake dimension for our range and use scopes to assign values to the object count measure. The object count measure is primitive – it just counts objects, and that's it. We will use it to find out how many objects belong to one or another range. We also need a table for the fake ranges, and we have to fill it with the ranges used in the ranges calculation. After creating the table and filling it with ranges we can add the fake range dimension to our cube. Let's see now how to solve the problem step by step.

    Solving the problem

    Suppose you have a ranges calculation defined like this:

        CASE
        WHEN [Measures].[ComplexCalc] < 0 THEN 'Below 0'
        WHEN [Measures].[ComplexCalc] >= 0 AND [Measures].[ComplexCalc] <= 50 THEN '0 - 50'
        ...
        END

    Let's now create a new table in our analysis database and name it FakeIncomeRange. Here is the definition for the table:

        CREATE TABLE [FakeIncomeRange]
        (
            [range_id] [int] IDENTITY(1,1) NOT NULL,
            [range_name] [nvarchar](50) NOT NULL,
            CONSTRAINT [pk_fake_income_range] PRIMARY KEY CLUSTERED
            (
                [range_id] ASC
            )
        )

    Don't forget to fill this table with the range labels you are using in the ranges calculation. To use the ranges from the table we have to add this table to our data source view and create a new dimension. We cannot bind this table to other tables; we have to leave it as it is. Our dimension has two attributes: ID and Name. The next thing to create is the calculation that returns the object count. This calculation is also fake, because we override its values for all ranges later. The object count measure can be defined as a calculation like this:

        COUNT([Object].[Object].[Object].members)

    Now comes the most crucial part of our solution – defining the scopes. Based on the data used in this posting, we have to define a scope for each of our ranges. Here is the example for the first range.
        SCOPE([FakeIncomeRange].[Name].&[Below 0], [Measures].[ObjectCount])
            This = COUNT(
                FILTER(
                    [Object].[Object].[Object].members,
                    [Measures].[ComplexCalc] < 0
                )
            )
        END SCOPE

    To get these scopes defined in the cube we need MDX script blocks for each line given here. Take a look at the screenshot to get a better idea of what I mean. This example is taken from SQL Server Books Online to avoid conflicts with NDA. :) From the previous example, the lines (MDX scripts) are:

    the line starting with SCOPE
    the block for This =
    the line with END SCOPE

    And now it is time to deploy and process our cube. Although you may see examples where there are semicolons at the end of statements, you don't need them. Visual Studio BI tools generate a separate command from each script block, so you don't need to worry about it.

    Read the article

  • MSSQL: Copying data from one database to another

    - by DigiMortal
    I have a database with data imported from another server using the import and export wizard of SQL Server Management Studio. There is also an empty database with the same tables, but this one also has primary keys, foreign keys and indexes. How do I get the data from the first database into the other? Here is the description of my crusade. And believe me, it is not a nice one.

    Bugs in the import and export wizard. There are some awful bugs in the import and export wizard that make data imports and exports possible only in a very limited manner: the wizard is not able to analyze foreign keys, and the wizard always wants to create tables, whatever you say in the settings. The result is a faulty and useless package. Now let's go step by step and make things work in our scenario.

    Databases. There are two databases. Let's name them like this: PLAIN contains the data imported from the remote server (no indexes, no keys, no nothing, just plain dumb data); CORRECT is an empty database with the same structure as the remote database (indexes, keys and everything else, but no data). Our goal is to get the data from PLAIN into CORRECT.

    1. Create the import and export package. At this point we create the faulty SSIS package using SQL Server Management Studio. Run the import and export wizard and let it create an SSIS package that reads data from CORRECT and writes it to, let's say, CORRECT-2. Make sure you enable identity insert. Make sure there are no views selected. Make sure you don't let the package create tables (you can miss this step because it wants to create tables anyway). Save the package to SSIS.

    2. Modify the import and export package. Now let's clean up the package and remove all the faulty crap. Connect SQL Server Management Studio to the SSIS instance. Select the package you just saved and export it to your hard disc. Run Business Intelligence Studio. Create a new SSIS project (DON'T MISS THIS STEP). Add the package from disc as an existing item to the project and open it. Move to the Control Flow page and do one of the following: remove all preparation SQL tasks and connect the Data Flow tasks directly, or modify all preparation SQL tasks so the existence of each table is checked before it is created (yes, you have to do this manually).

    Add a new Execute SQL task as the first task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:

        EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
        GO

        EXEC sp_MSForEachTable 'DELETE FROM ?'
        GO

    Save the task. Add a new Execute SQL task as the last task in the control flow: open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:

        EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
        GO

    Save the task. Now connect the first Execute SQL task to the first Data Flow task, and the last Data Flow task to the second Execute SQL task. Move to the Package Explorer tab and change the connections under the Connection Managers folder: make the source connection use database PLAIN and the destination connection use database CORRECT. Save the package and rebuild the project. Update the package using SQL Server Management Studio. Some hints: make sure you take the package from the solution folder, because that is where it is saved now; don't overwrite the existing package but use a numeric suffix and let Management Studio create a new version of it. Now you are done with your package. Run it to test it and clean out all the errors you find.

    TRUNCATE vs DELETE. You can see that I used DELETE FROM instead of TRUNCATE. Why?
    Because TRUNCATE has some nasty limits (taken from MSDN): "You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint; instead, use a DELETE statement without a WHERE clause. Because TRUNCATE TABLE is not logged, it cannot activate a trigger. TRUNCATE TABLE may not be used on tables participating in an indexed view." As I am not sure what tables you have and how they are used, I provided the solution that should work for all scenarios. If you need better performance, then in some cases you can use TRUNCATE TABLE instead of DELETE.

    Conclusion. My conclusion is bitter this time, although I am a very positive guy. It is A.D. 2010 and still we have to write stupid hacks for simple things. Simple tools that existed before are long gone, and we have to live with mysterious bloatware that is our only choice when using the default tools. If you look at the length of this posting and the number of steps I had to take for one easy thing, you should treat it as a signal that something has gone wrong in recent years. Although I got my job done, I would still be happier if the out-of-the-box tools were more intelligent one day.

    References:
    - T-SQL Trick for Deleting All Data in Your Database (Mauro Cardarelli)
    - TRUNCATE TABLE (MSDN Library)
    - Error Handling in SQL 2000 – a Background (Erland Sommarskog)
    - Disable/Enable Foreign Key and Check constraints in SQL Server (Decipher)
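    For the record, when both databases sit on the same server instance, a plain T-SQL script can move the data without any SSIS package at all. A sketch only: the Customer table and its columns are hypothetical, and you would repeat the INSERT block for each table.

        USE CORRECT
        GO
        EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
        GO
        -- One block like this per table; identity insert only where the table has an identity column
        SET IDENTITY_INSERT dbo.Customer ON
        INSERT INTO dbo.Customer (CustomerId, Name)
            SELECT CustomerId, Name FROM PLAIN.dbo.Customer
        SET IDENTITY_INSERT dbo.Customer OFF
        GO
        EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
        GO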

    Read the article

  • Upgrading to Code Based Migrations EF 4.3.1 with Connector/Net 6.6

    - by GABMARTINEZ
    Entity Framework 4.3.1 includes a new feature called Code First Migrations. We are adding support for this feature in our upcoming 6.6 release of Connector/Net. In this walk-through we'll see the workflow of code-based migrations when you have an existing application and would like to upgrade to EF 4.3.1 and use this approach, so you can keep track of the changes you make to your database.

    The first thing we need to do is add the new Entity Framework 4.3.1 package to our application. This should be done via the NuGet package manager. You can read more about why EF is not part of the .NET Framework here.

    Adding EF 4.3.1 to our existing application. Inside VS 2010 go to Tools -> Library Package Manager -> Package Manager Console; this opens the PowerShell host window where we can work with all the EF commands. To install this library into your existing application, type:

        Install-Package EntityFramework

    This makes some changes to your application, so let's check them. In your .config file you'll see a <configSections> element which contains the EntityFramework version you have, and an <entityFramework> section was also added, as shown below. This section is by default configured to use SQL Express, which won't be necessary in this case, so you can comment it out or leave it empty. Also please make sure you're using Connector/Net 6.6.x, which is the version with this support, as shown in the previous image.

    At this point we face one issue: in order to work with migrations we need the __MigrationHistory table, which we don't have yet since our database was created with an older version. This table is used to keep track of the changes in our model, so we need to get it into our existing database.

    Getting a migration-history table into an existing database. The first thing we need to do to enable migrations in our existing application is create our configuration class, which sets up the MySqlClient provider as our SQL generator. Add it with the following code:

        using System.Data.Entity.Migrations;  // add this at the top of your .cs file

        // Make sure to use the name of your existing DbContext
        public class Configuration : DbMigrationsConfiguration<NameOfYourDbContext>
        {
            public Configuration()
            {
                // Set automatic migrations to false since we'll apply the migrations manually in this case
                this.AutomaticMigrationsEnabled = false;
                SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());
            }
        }

    This code sets up the configuration we'll be using when executing all the migrations for our application. Once we have done this, we can build our application to check that everything is fine.

    Creating our initial migration. Now let's add our initial migration. In the Package Manager Console, execute "add-migration InitialCreate". You can use any other name, but I like to set this one as our initial create for future reference. After we run this command, some changes are made to our application: a new Migrations folder is created, along with a new migration class called InitialCreate, which in most cases should have empty Up and Down methods as long as your database is up to date with your model. Since all your entities already exist, delete any duplicated code that would create an entity already present in your database. I find this easier when you don't have any pending updates to make to your database.
    Now we have an empty migration that makes no changes to our database and represents how things stand at the beginning of our migrations. Finally, let's create our migrations-history table. Optionally you can add SQL code to delete the EdmMetadata table, which is not needed anymore:

        public override void Up()
        {
            // Just make sure you used version 4.1 or later
            Sql("DROP TABLE EdmMetadata");
        }

    From our Package Manager Console, type:

        Update-Database

    If you would like to see the operations performed by each Update-Database command, you can add the -Verbose flag. This makes two important changes. First, it executes the Up method in the initial migration, which changes nothing in the database. Second, and very important, it creates the __MigrationHistory table necessary to keep track of your changes. The next time you make a change to your database, it will compare the current model to the one stored in the Model column of this table.

    Conclusion. The important point of this walk-through is that we must create our initial migration before we start making any changes to our model. This way we add the necessary __MigrationHistory table to our existing database, so we can keep the database up to date with all the changes we make in our context model using migrations. Hope you have found this information useful. Please let us know if you have any questions or comments, and also check our forums here, where we keep answering questions for the community in general. Happy MySQL/Net coding!
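    To illustrate where this leads, a later schema change would follow the same add-migration / Update-Database flow. The sketch below is only an illustration: the Customers table and Phone column are hypothetical names, not part of the walk-through above.

        using System.Data.Entity.Migrations;

        public partial class AddCustomerPhone : DbMigration
        {
            public override void Up()
            {
                // Generated by: add-migration AddCustomerPhone
                AddColumn("Customers", "Phone", c => c.String(maxLength: 20));
            }

            public override void Down()
            {
                DropColumn("Customers", "Phone");
            }
        }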

    Read the article

  • Ubuntu Server 12.04 as a router. Problem with DNS?? Or Routing table?

    - by Lorenzo
    I have a VirtualBox lab made up of 4 Windows 2008 R2 servers (DC/DNS, SQL, SHAREPOINT, EXCHANGE) that are configured with static IP addresses, with NICs attached to the internal network. Everything works. I then had the requirement to run some tests that also access external services available on the internet. To keep things clean and similar to the production environment, I installed another VM with Ubuntu Server 12.04 64-bit, configured (I hope) to work as a router as described in this post. This VM has two network interfaces: the first is bridged with the host and is used as the WAN connection, and the other is attached to the internal network with its own static IP address on the internal subnet. But the Windows servers do not reach the internet, while the Unix one does. Here is the output of the route command:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags  Metric  Ref  Use  Iface
        default         10.69.121.1     0.0.0.0         UG     100     0    0    eth0
        10.69.121.0     *               255.255.255.0   U      0       0    0    eth0
        192.168.83.0    *               255.255.255.0   U      0       0    0    eth1

    Can somebody help me with this configuration? :) Thanks!

    Addendum: I forgot to mention that one of the Windows servers hosts a DNS service, for which I should maybe configure a forwarding server, but I do not know exactly which server to forward to... :(
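    For reference, the minimal NAT setup such router guides usually prescribe looks something like the sketch below. The interface roles match the routing table above (eth0 = WAN, eth1 = internal), but whether this matches what the linked post configured is an assumption.

        # Enable IPv4 forwarding (persist it in /etc/sysctl.conf to survive reboots)
        sysctl -w net.ipv4.ip_forward=1

        # Masquerade internal traffic going out the WAN interface
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT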

    Read the article

  • C# how to get current encoding type used by C# to write/read configuration for config file?

    - by 5YrsLaterDBA
    I am doing connection string encryption. We use our own encryption key with the AES algorithm to do this. During the process we need to convert a string to a byte array and then convert the byte array back to a string. I found that the encoding plays an important role in those conversions, so I need to know which encoding C# is using to get the above conversions right. Any idea how to get the current encoding programmatically? Thanks.
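    As an aside, the usual way to sidestep this question is to pin the encoding explicitly on both sides of the round trip, so nothing depends on a runtime default. A minimal sketch, assuming UTF-8 is acceptable for the connection string:

        using System.Text;

        // The same Encoding must be used in both directions of the round trip
        byte[] plainBytes = Encoding.UTF8.GetBytes(connectionString);
        // ... encrypt plainBytes, store, later decrypt back to plainBytes ...
        string restored = Encoding.UTF8.GetString(plainBytes);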

    Read the article

  • Sqlite catalog and schema settings (for MyBatis generator)

    - by user1769754
    I have been trying to use the MyBatis generator on an SQLite database but can't seem to find what settings to use for the XML attributes catalog and schema. The generator errors with "Generation Warnings Occured: Table configuration with catalog main, schema sqlite_master, and table testTable did not resolve to any tables". I can't find much on the SQLite website other than this, which suggests something like catalog = main, schema = sqlite or sqlite_master. My generator XML file is:

        <?xml version="1.0" encoding="UTF-8" ?>
        <!DOCTYPE generatorConfiguration PUBLIC
            "-//mybatis.org//DTD MyBatis Generator Configuration 1.0//EN"
            "http://mybatis.org/dtd/mybatis-generator-config_1_0.dtd">
        <generatorConfiguration>
          <context id="context">
            <jdbcConnection driverClass="org.sqlite.JDBC"
                connectionURL="jdbc:sqlite:testDB.sqlite" userId="" password="">
            </jdbcConnection>
            <javaModelGenerator targetPackage="model" targetProject="test/src"></javaModelGenerator>
            <sqlMapGenerator targetPackage="model" targetProject="test/src"></sqlMapGenerator>
            <javaClientGenerator targetPackage="model" targetProject="test" type="XMLMAPPER"></javaClientGenerator>
            <table catalog="main" schema="sqlite_master" tableName="testTable">
              <property name="useActualColumnNames" value="true"/>
            </table>
          </context>
        </generatorConfiguration>

    I have also tried other combinations like:

        <table catalog="main" schema="sqlite" tableName="testTable">
        <table schema="sqlite" tableName="testTable">
        <table schema="sqlite_master" tableName="testTable">
        <table schema="main.testTable" tableName="testTable">

    Anyone here know the right settings?
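    One more combination that may be worth a try, purely a guess from the error message since SQLite has no real catalog/schema hierarchy, is dropping both attributes entirely and letting the generator match on the table name alone:

        <table tableName="testTable">
          <property name="useActualColumnNames" value="true"/>
        </table>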

    Read the article

  • how to read user input from custom dialog in android?

    - by urobo
    I'd like to use a custom dialog built over an AlertDialog to obtain login info from the user. I first use the LayoutInflater to get the layout and then pass it to the AlertDialog.Builder.setView() method:

        LayoutInflater inflater = (LayoutInflater) Home.this.getSystemService(LAYOUT_INFLATER_SERVICE);
        layoutLogin = inflater.inflate(R.layout.login, (ViewGroup) findViewById(R.id.rl));

    My layout consists of two TextViews and two EditTexts, for username and password respectively. Then I override the onCreateDialog method, checking the dialog id and putting it all together; during the building phase I use the setButton(...) method to add a confirmation button (neutral, though):

        /* (non-Javadoc)
         * @see android.app.Activity#onCreateDialog(int)
         */
        @Override
        protected Dialog onCreateDialog(int id) {
            AlertDialog d = null;
            AlertDialog.Builder builder = new AlertDialog.Builder(this);
            switch (id) {
            ...
            case Home.DIALOG_LOGIN:
                builder.setView(layoutLogin);
                builder.setMessage("Sign in to your DyCaPo Account").setCancelable(false);
                d = builder.create();
                d.setTitle("Login");
                Message msg = new Message();
                msg.setTarget(Home.this.handleLogin);
                d.setButton(Dialog.BUTTON_NEUTRAL, "Sign in", msg);
                break;
            ...
            }
            return d;
        }

    Then I set up the Handler handleLogin:

        private Handler handleLogin = new Handler() {
            /* (non-Javadoc)
             * @see android.os.Handler#handleMessage(android.os.Message)
             */
            @Override
            public void handleMessage(Message msg) {
                // This should hold the EditText field for the username
                String input = usernameInput.getText().toString();
            }
        };

    which is just a stub for now. What I don't get is when and where I have to access the two fields: I tried to save a reference to them, but unfortunately I always get a NullPointerException. Can anyone tell me what I'm doing wrong and give some guidelines for working with custom dialogs? Thanks in advance! :)
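    For what it's worth, the usual pattern is to resolve the EditText handles from the inflated view itself rather than via the Activity's findViewById, which only searches the Activity's own content view. A sketch, where the ids username and password are assumptions about the login layout:

        // Look the fields up on the inflated dialog layout, not on the Activity
        EditText usernameInput = (EditText) layoutLogin.findViewById(R.id.username);
        EditText passwordInput = (EditText) layoutLogin.findViewById(R.id.password);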

    Read the article

  • How to list column headers of a SQL Server table using sp_help perhaps?

    - by Hamish Grubijan
    Hi, I have a few tables with 70-80 columns in them. I would like to populate them with somewhat random data, to the extent that key violations etc. allow it. The first step is simply to get the list of all headers. There seem to be two ways:

    A) Run select * from table_of_interest; in MSFT SQL Server Management Studio 2008, then right-click the result and click "Copy With Headers". However, I get zero rows back, and when I try to copy nothing + headers, I get:

        TITLE: Microsoft SQL Server Management Studio
        ------------------------------
        Value cannot be null.
        Parameter name: data (System.Windows.Forms)
        ------------------------------
        BUTTONS: OK
        ------------------------------

    This looks like a bug... anyhow, there is another way.

    B) I can run sp_help table_of_interest;. However, I end up getting too much back: seven different result sets, of which I am only interested in the second one. The columns of that second result set are:

        Column_name | Type | Computed | Length | Prec | Scale | Nullable | TrimTrailingBlanks | FixedLenNullInSource | Collation

    I might be interested in just Column_name and Type, but maybe other columns too. So, since sp_help probably runs a bunch of queries, how do I get under the hood? How can I run just the second query AND filter down to the columns I am interested in? Many thanks!
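    One way to answer this without going through sp_help at all is to query the catalog views directly. A sketch, assuming the table lives in the dbo schema:

        SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_SCHEMA = 'dbo'
          AND TABLE_NAME = 'table_of_interest'
        ORDER BY ORDINAL_POSITION;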

    Read the article

  • IOException reading a large file from a UNC path into a byte array using .NET

    - by Matt
    I am using the following code to attempt to read a large file (280 MB) into a byte array from a UNC path:

        public void ReadWholeArray(string fileName, byte[] data)
        {
            int offset = 0;
            int remaining = data.Length;
            log.Debug("ReadWholeArray");
            FileStream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read);
            while (remaining > 0)
            {
                int read = stream.Read(data, offset, remaining);
                if (read <= 0)
                    throw new EndOfStreamException(
                        String.Format("End of stream reached with {0} bytes left to read", remaining));
                remaining -= read;
                offset += read;
            }
        }

    This is blowing up with the following error:

        System.IO.IOException: Insufficient system resources exist to complete the requested

    If I run this using a local path it works fine; in my test case the UNC path actually points to the local box. Any thoughts on what is going on here?
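    One workaround that often comes up for this error is to cap the size of each Read call, so that no single request passed to the network redirector is huge. A sketch, not a verified fix; the 1 MB chunk size is an assumption:

        const int MaxChunk = 1024 * 1024; // cap each request at 1 MB
        using (FileStream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read))
        {
            int offset = 0;
            int remaining = data.Length;
            while (remaining > 0)
            {
                int read = stream.Read(data, offset, Math.Min(remaining, MaxChunk));
                if (read <= 0)
                    throw new EndOfStreamException(
                        String.Format("End of stream reached with {0} bytes left to read", remaining));
                remaining -= read;
                offset += read;
            }
        }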

    Read the article

  • Windows "known folders": is there any one of them which is reliably read/write for all users on all

    - by Stabledog
    SHGetKnownFolderPath() and its cohorts accept one of the constants defined here (http://msdn.microsoft.com/en-us/library/dd378457%28v=VS.85%29.aspx), returning the path to a directory. I'm looking for one of these folders which is reliably writable by all users (including LocalSystem) on XP, Vista, and Windows 7... but I think I'm striking out. It appears that, in fact, there is no longer any single location on the hard drive where you can put a file and be assured that all users can write to it on all these OS versions without fiddling with the permissions first. Is this true?
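    For context, the call shape under discussion looks like the sketch below. FOLDERID_ProgramData is the folder most often suggested for machine-wide data, but whether it meets the all-users-write requirement without an ACL tweak is exactly the open question; note also that SHGetKnownFolderPath itself is Vista and later, so XP would need SHGetFolderPath with CSIDL_COMMON_APPDATA.

        #include <windows.h>
        #include <shlobj.h>   // SHGetKnownFolderPath, FOLDERID_ProgramData (Vista+)
        #include <stdio.h>

        int main()
        {
            PWSTR path = NULL;
            // FOLDERID_ProgramData maps to e.g. C:\ProgramData on Vista and later
            HRESULT hr = SHGetKnownFolderPath(FOLDERID_ProgramData, 0, NULL, &path);
            if (SUCCEEDED(hr))
            {
                wprintf(L"%s\n", path);
                CoTaskMemFree(path);  // caller must free the returned string
            }
            return 0;
        }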

    Read the article

  • How do you read a segfault kernel log message.

    - by Sullenx
    This may be a very simple question. I am attempting to debug an application which generates the following segfault error in /var/log/kern.log:

        Jan  8 13:25:56 myhost kernel: myapp[15514]: segfault at bf794ef0 ip 0805130b sp bf794ef0 error 6 in myapp[8048000+24000]

    Here are my questions:

    1) Is there any documentation on what the different error numbers on a segfault mean? In this instance it is error 6, but I've also seen errors 4 and 5.

    2) What is the meaning of the information at bf794ef0 ip 0805130b sp bf794ef0 and myapp[8048000+24000]? So far I was able to compile with symbols, and when I do an "x 0x8048000+24000" it returns a symbol. Is that the correct way of doing it? My assumptions so far are: sp = stack pointer? ip = instruction pointer, at = ????, myapp[8048000+24000] = address of symbol?
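    For reference, on x86 the error value is a bitmask set by the kernel's page-fault handler (see arch/x86/mm/fault.c). A decode sketch, written as comments:

        /* x86 page-fault error code bits:
         *   bit 0: 0 = no page found        1 = protection fault
         *   bit 1: 0 = read access          1 = write access
         *   bit 2: 0 = kernel-mode access   1 = user-mode access
         *
         * error 6 = 0b110 -> user-mode write to a missing page
         * error 4 = 0b100 -> user-mode read of a missing page
         * error 5 = 0b101 -> user-mode read blocked by page protection
         */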

    Read the article

  • Fluent NHibernate is bringing ClassMap Id and SubClassMap Id to referenced table?

    - by Andy
    Hi, I have the following entities I'm trying to map:

        public class Product
        {
            public int ProductId { get; private set; }
            public string Name { get; set; }
        }

        public class SpecialProduct : Product
        {
            public ICollection<Option> Options { get; private set; }
        }

        public class Option
        {
            public int OptionId { get; private set; }
        }

    And the following mappings:

        public class ProductMap : ClassMap<Product>
        {
            public ProductMap()
            {
                Id(x => x.ProductId);
                Map(x => x.Name);
            }
        }

        public class SpecialProductMap : SubclassMap<SpecialProduct>
        {
            public SpecialProductMap()
            {
                Extends<ProductMap>();
                HasMany(p => p.Options).AsSet().Cascade.SaveUpdate();
            }
        }

        public class OptionMap : ClassMap<Option>
        {
            public OptionMap()
            {
                Id(x => x.OptionId);
            }
        }

    The problem is that my tables end up like this:

        Product
        --------
        ProductId
        Name

        SpecialProduct
        --------------
        ProductId

        Option
        ------------
        OptionId
        ProductId         // This is wrong
        SpecialProductId  // This is wrong

    There should be only the one ProductId and a single reference to the SpecialProduct table, but we get "both" ids and two references to SpecialProduct.ProductId. Any ideas? Thanks, Andy
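    One thing that may be worth trying, an assumption on my part rather than a verified fix, is pinning the foreign key column on the collection mapping so only one key column gets generated for the hierarchy:

        public class SpecialProductMap : SubclassMap<SpecialProduct>
        {
            public SpecialProductMap()
            {
                Extends<ProductMap>();
                // Force a single FK column on Option instead of one per class in the hierarchy
                HasMany(p => p.Options).KeyColumn("ProductId").AsSet().Cascade.SaveUpdate();
            }
        }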

    Read the article
