
  • Mac OS X 10.8 VPN Server: Bypass VPN for LAN traffic (routing LAN traffic to secondary connection)

    - by Dan Robson
    I have somewhat of an odd setup for a VPN server with OS X Mountain Lion. It's essentially being used as a bridge to bypass my company's firewall to our extranet connection - certain things our team needs to do require unfettered access to the outside, and changing IT policies to allow traffic through the main firewall is just not an option.

    The extranet connection is provided through a Wireless-N router (let's call it Wi-Fi X). My Mac Mini server is configured with the connection to this router as the primary connection, and thus has unfettered access to the internet via the router. Connections to this device on the immediate subnet are possible through the LAN port, but outside the subnet things are less reliable.

    I was able to configure the VPN server to provide IP addresses to clients in the 192.168.11.150-192.168.11.200 range using both PPTP and L2TP, and I'm able to connect to the extranet through the VPN using the standard Mac OS X VPN client in System Preferences. Unsurprisingly, however, a local address (let's call it internal.company.com) returns nothing.

    I tried to bypass the limitation of the VPN server by setting up routes in the VPN settings. Our company uses 13.x.x.x for all internal traffic, instead of 10.x.x.x, so the routing table looked something like this:

        IP Address     Subnet Mask     Configuration
        0.0.0.0        248.0.0.0       Private
        8.0.0.0        252.0.0.0       Private
        12.0.0.0       255.0.0.0       Private
        13.0.0.0       255.0.0.0       Public
        14.0.0.0       254.0.0.0       Private
        16.0.0.0       240.0.0.0       Private
        32.0.0.0       224.0.0.0       Private
        64.0.0.0       192.0.0.0       Private
        128.0.0.0      128.0.0.0       Private

    I was under the impression that if nothing was entered here, all traffic was routed through the VPN. With something entered, only traffic specifically marked to go through the VPN would do so, and all other traffic would be up to the client to access using its own default connection. This is why I had to specifically mark every subnet except 13.x.x.x as Private.

    My suspicion is that since I can't reach the VPN server from outside the local subnet, it's not making a connection to the main DNS server and thus can't be reached on the larger network. I'm thinking that hostnames like internal.company.com aren't kicked back to the client to resolve, because the server has no idea that the IP address falls in the public range - I suspect (I should probably ping test it, but don't have access to it right now) that it can't reach the DNS server to find out anything about that hostname.

    It seems to me that all my options for resolving this boil down to the same type of solution: figure out how to reach the DNS with the secondary connection on the server. I'm thinking that if I'm able to do [something] to get my server to recognize that it should also check my local gateway (let's say Server IP == 13.100.100.50 and Gateway IP == 13.100.100.1), then the gateway can tell me to go find the DNS server at 13.1.1.1 and give me information about my internal network. I'm very confused about this path - really not sure if I'm even making sense.

    I thought about trying to do this client side, but that doesn't make sense either, since it would add time to each and every client-side setup. Plus, it just seems more logical to solve it on the server - I could either get rid of my routing table altogether or keep it; I think the only difference would be that internal traffic would also go through the server, which is probably an unnecessary burden on it.

    Any help out there? Or am I in over my head? A forward proxy or transparent proxy is also an option for me, although I have no idea how to set either of those up. (I know, Google is my friend.)
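
    One avenue worth sketching (an assumption, not something confirmed in the post): add a static route for the 13.0.0.0/8 network via the internal gateway, and give the company domain its own scoped resolver, so the server itself can reach the internal DNS over the secondary connection. The addresses below are the placeholders used in the question:

        # a minimal sketch, assuming the placeholder addresses above
        sudo route -n add -net 13.0.0.0 -netmask 255.0.0.0 13.100.100.1
        # per-domain resolver: queries for *.company.com go to the internal DNS
        sudo mkdir -p /etc/resolver
        echo "nameserver 13.1.1.1" | sudo tee /etc/resolver/company.com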


  • How to recover data files from xampp-windows to xampp-linux after crash?

    - by David Buehler
    My Windows box died after I developed a database in xampp on it; fortunately I have a backup of the entire F:/TestWeb/Xampp partition. Unfortunately, I did not do an export (nor a dump) of the "Lws2" database before the crash. I have replaced the defunct machine with one running Mint7 (based on Ubuntu 9.04 "Jaunty Jackalope") and installed xampp-linux into the /opt partition, so the new xampp now runs fine in /opt/lampp, and says all the elements are secured by passwords (which I just assigned during this installation).

    I assumed that xampp-windows installed in November would migrate easily to xampp-linux installed in February - a bad assumption. It apparently would have been simple if I had known enough to do an export or a dump before the crash, but.... The backup was done to a Network Attached Storage drive, which is formatted as "vfat", so the backup does not carry with it any valid ownership permissions from MySQL on NTFS.

    I now see from my backup that the old data resided in \TestWeb\Xampp\Mysql\Data\Lws2\ and consists of 7 ".frm" files which define my tables. The actual data - I suppose a ".sql" file or files - has disappeared, and I am resigning myself to two days of retyping it. But I do not wish to do the table layouts all over again. So:

    - I copied the Data tree to /opt/lampp/Data - PhpMyAdmin does not see it.
    - I copied the Lws2 tree to /opt/lampp/Lws2 - PhpMyAdmin does not see it.
    - I copied the Data tree to /opt/lampp/var/mysql/Data - PhpMyAdmin does not see it.
    - I copied the Lws2 tree to /opt/lampp/var/mysql/Lws2 - PhpMyAdmin does not see it.

    Then I adjusted all the permissions from owner "nobody" to owner "root" and gave full permissions to all groups and to all others, with permissions percolating down, in all 4 trees. You guessed it - PhpMyAdmin does not see any database named Lws2, only its 4 default ones. I double-checked the permissions and rebooted Linux and repeated the tests. At some point in that process I did see PhpMyAdmin showing "lws2(7)", but when I clicked on it I saw a "no table found" message. I have not been able to recreate that experience.

    Apparently there are some setup files for MySQL and for PhpMyAdmin which need to be set up by running a wizard or two, or by editing the files directly. I grepped the TestWeb tree and found an old ldir = "C:TestWeb\Xampp\MySql\" and a DataDir = C:TestWeb\Xampp\MySql\ in a .php file and in a .bat file, but I cannot find the corresponding config file names on the /opt partition - so it looks as if these wizards have not been run to create them.

    What config files does Linux use to set up MySQL config files for PhpMyAdmin? What wizards do I need to run to point the MySQL engine and PhpMyAdmin at the folder /opt/lampp/data/ with its lws2 folder inside it? Or which files do I need to edit, with a sample of what they normally say under Linux?

    Incidentally, I remember I converted from MyISAM with its .MYD and .MYI files to InnoDB after entering only a small amount of the data - and I do not know what file types to look for - perhaps my data is still there but under another guise or in another place? Is it something as simple as Linux needing to see "/data/" instead of "/Data"? I will check that out while waiting for a response.

    If anyone can point me to documentation that discusses this level of detail - I will read it avidly! In any case, thanks for any clarification you can give on this thorny problem. wizdum
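
    A note on where the data may have gone (hedged, since it rests on the InnoDB conversion mentioned above): after a switch to InnoDB, the row data no longer lives next to the .frm files but in the shared ibdata1 file in the Data directory, so restoring only the Lws2 folder restores structure without data. A sketch of a restore attempt, assuming the backup layout described in the question, with /media/nas as a placeholder for wherever the vfat backup is mounted:

        # stop MySQL, copy BOTH the database folder and the InnoDB system files, fix ownership
        sudo /opt/lampp/lampp stopmysql
        sudo cp -r /media/nas/TestWeb/Xampp/Mysql/Data/Lws2 /opt/lampp/var/mysql/
        sudo cp /media/nas/TestWeb/Xampp/Mysql/Data/ibdata1 /opt/lampp/var/mysql/
        sudo cp /media/nas/TestWeb/Xampp/Mysql/Data/ib_logfile* /opt/lampp/var/mysql/
        sudo chown -R mysql:mysql /opt/lampp/var/mysql
        sudo /opt/lampp/lampp startmysql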


  • Server currently under DDOS, not sure what to do

    - by Volex
    My web server is currently under what I believe to be a DDoS attack. The messages log is full of these kinds of messages:

        May 13 15:51:19 kernel: nf_conntrack: table full, dropping packet.
        May 13 15:51:19 last message repeated 9 times
        May 13 15:51:24 kernel: __ratelimit: 78 callbacks suppressed
        May 13 15:51:24 kernel: nf_conntrack: table full, dropping packet.
        May 13 15:52:06 kernel: possible SYN flooding on port 80. Sending cookies.

    and netstat shows a huge number of the following:

        tcp  0  0 my.host.com:http  bb176da0.virtua.com.br:4998   SYN_RECV
        tcp  0  0 my.host.com:http  187.0.43.109:2694             SYN_RECV
        tcp  0  0 my.host.com:http  109.229.4.145:1722            SYN_RECV
        tcp  0  0 my.host.com:http  189-84-163-244.sodobr:63267   SYN_RECV
        tcp  0  0 my.host.com:http  bd66839d.virtua.com.br:3469   SYN_RECV
        tcp  0  0 my.host.com:http  69.101.56.190.dsl.int:52552   SYN_RECV
        tcp  0  0 my.host.com:http  pc-62-230-47-190.cm.vt:2262   SYN_RECV
        tcp  0  0 my.host.com:http  189-84-163-244.sodobr:63418   SYN_RECV
        tcp  0  0 my.host.com:http  pc-62-230-47-190.cm.vt:1741   SYN_RECV
        tcp  0  0 my.host.com:http  zaq3d739320.zaq.ne.jp:2141    SYN_RECV
        tcp  0  0 my.host.com:http  netacc-gpn-4-80-73.po:52676   SYN_RECV

    tcpdump shows:

        17:11:08.564510 IP 187-4-1xx-4.xxx.ipd.brasiltelecom.net.br.54821 > my.host.com.http: S 999692166:999692166(0) win 65535 <mss 1452,nop,nop,sackOK>
        17:11:08.566347 IP 114-44-171-67.dynamic.hinet.net.1129 > my.host.com.http: S 605369055:605369055(0) win 65535 <mss 1440,nop,nop,sackOK>
        17:11:08.570210 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5590 > my.host.com.http: S 2813379182:2813379182(0) win 16384 <mss 1460,nop,nop,sackOK>
        17:11:08.571290 IP dsl-189-143-30-99-dyn.prod-infinitum.com.mx.1615 > my.host.com.http: S 281542700:281542700(0) win 65535 <mss 1452,nop,nop,sackOK>
        17:11:08.583847 IP dsl-189-143-30-99-dyn.prod-infinitum.com.mx.1617 > my.host.com.http: S 499413892:499413892(0) win 65535 <mss 1452,nop,nop,sackOK>
        17:11:08.588680 IP 170.51.229.112.2569 > my.host.com.http: S 2195084898:2195084898(0) win 65535 <mss 1460,nop,nop,sackOK>
        17:11:08.588773 IP gw2-1.211.ru.3180 > my.host.com.http: F 2315901786:2315901786(0) ack 2620913033 win 64240
        17:11:08.590656 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5614 > my.host.com.http: S 2813715032:2813715032(0) win 16384 <mss 1460,nop,nop,sackOK>
        17:11:08.591212 IP 203.82.82.54.15848 > my.host.com.http: S 4070423507:4070423507(0) win 16384 <mss 1400,nop,nop,sackOK>
        17:11:08.591254 IP 203.82.82.54.2545 > my.host.com.http: S 1790910784:1790910784(0) win 16384 <mss 1400,nop,nop,sackOK>
        17:11:08.591289 IP 203.82.82.54.28306 > my.host.com.http: S 578615626:578615626(0) win 16384 <mss 1400,nop,nop,sackOK>
        17:11:08.591591 IP gw2-1.211.ru.3191 > my.host.com.http: F 2316435991:2316435991(0) ack 2634205972 win 64240
        17:11:08.591790 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5593 > my.host.com.http: S 2813659017:2813659017(0) win 16384 <mss 1460,nop,nop,sackOK>
        17:11:08.593691 IP gw2-1.211.ru.3203 > my.host.com.http: F 2316834420:2316834420(0) ack 2629074987 win 64240

    I'm not sure what I can do to limit/mitigate this; currently no webpages are being served. Any help gratefully appreciated.
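
    The log lines point at two separate pressure points: the conntrack table overflowing, and the SYN backlog (the kernel is already sending SYN cookies). A sketch of common first-aid settings, assuming a stock Linux netfilter stack - the numbers are starting points, not tuned values:

        # keep SYN cookies on and raise the backlog
        sysctl -w net.ipv4.tcp_syncookies=1
        sysctl -w net.ipv4.tcp_max_syn_backlog=4096
        # raise the conntrack table that is overflowing, and expire half-open entries faster
        sysctl -w net.netfilter.nf_conntrack_max=262144
        sysctl -w net.netfilter.nf_conntrack_tcp_timeout_syn_recv=30
        # or take port 80 out of connection tracking entirely so the table cannot fill
        iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
        iptables -t raw -A OUTPUT -p tcp --sport 80 -j NOTRACK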


  • Transparent proxy which preserves client mac address

    - by A G
    I have a customer that wants to intercept SSL traffic as it leaves their network. My proposed solution is to set up a proxy that is transparent at both layer 2 and layer 3, so it can simply be dropped into their network without any change in config required. The proxy has two NICs, one connected to the server, the other to the client. The client, proxy and gateway are under control of the customer; the server is not. For example:

        client --- proxy --- gateway -|- server

    I have my proxy program configured with the IP_TRANSPARENT socket option so it can respond to connections destined for a remote IP. I am using the following setup:

        iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --on-port 3128 --tproxy-mark 1/1
        iptables -t mangle -A PREROUTING -p tcp -j MARK --set-mark 1
        ip rule add fwmark 1/1 table 1
        ip route add local 0.0.0.0/0 dev lo table 1

    The client in question is on its own subnet and has been configured so that the proxy is the default gateway. The result is:

    1. Client sends a frame to the proxy; source IP is client, source MAC is client, destination IP is server, destination MAC is proxy.
    2. Proxy forwards this frame to the gateway; source IP is proxy, source MAC is proxy, destination IP is server, destination MAC is gateway.
    3. Gateway forwards this to the server and gets a response back.
    4. Gateway sends the reply back to the proxy; source IP is server, source MAC is gateway, destination IP is proxy, destination MAC is proxy.
    5. Proxy forwards this reply to the client; source IP is server, source MAC is proxy, destination IP is client, destination MAC is client.

    The tproxy and iptables configuration lets the proxy send packets with a non-local IP address. Is there a way to make something transparent at the MAC address level? That is, put the client on the same subnet as the gateway, and have the gateway see the source IP and MAC as those of the client, even though the frames originated from the proxy. Could this be done by configuring the proxy as a bridge, then using ebtables to escalate the traffic to be handled by iptables? When I use ebtables to push something up to iptables, it appears my proxy program doesn't respond to the packets, as they are destined for the gateway's MAC address, not the proxy's. What are some other potential avenues I could investigate?

    EDIT: When the client and gateway are on different subnets (and the client has set the proxy as the gateway), it works as described in steps 1 to 5. But I want to know if it is possible to have the client and gateway on the same subnet and have the proxy fully transparent (i.e. the client is not aware of the proxy). Thanks!

    EDIT 2: I can configure the proxy as a bridge using brctl, but cannot find a way to direct this traffic to my proxy program - asked here: Possible for linux bridge to intercept traffic?. Currently, with the steps numbered 1 to 5, it operates at layer 3; it is transparent on the client side (the client thinks it is talking to the server's IP), but not on the gateway side (the gateway is talking to the proxy's IP). What I want to find out is: is it possible to make this operate at layer 2, so it is fully transparent? What are the available options I should research? Thanks
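
    For the bridge route, a hedged sketch of the combination usually paired with TPROXY - assuming eth0/eth1 are bridged as br0 and the proxy still listens on 3128. ebtables' BROUTING chain can divert selected frames up to the IP stack while everything else is bridged untouched, and untouched frames keep the client's MAC (port 443 here for the SSL case; the rules above use 80):

        brctl addbr br0
        brctl addif br0 eth0
        brctl addif br0 eth1
        ip link set br0 up
        # DROP in the broute table does not discard the frame; it diverts it to the
        # routing path, where the TPROXY rules above can grab it
        ebtables -t broute -A BROUTING -p IPv4 --ip-protocol tcp --ip-destination-port 443 -j redirect --redirect-target DROP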


  • OpenVPN on Tomato and Vista - can't see my network

    - by Ian
    I followed the instructions here (http://todayguesswhat.blogspot.ca/2011/03/quick-simple-vpn-setup-guide-using.html) to set up a TCP connection to OpenVPN on my Tomato router. I used TCP because the place I usually surf at seems to have the other ports blocked. My Vista laptop is able to connect to the router, but I don't appear to be getting an IP address. I'm able to access my router's admin page, but I can't see the network at home. When I browse to WhatsMyIP I see my home IP. Here are the results of route print -4 when I'm connected just to the library, and when I've fired up the VPN connection as well.

    Library only:

        ===========================================================================
        Interface List
         22 ...00 ff c4 a0 e7 5c ...... TAP-Win32 Adapter V9
         15 ...00 23 4e 20 b3 64 ...... Atheros AR9281 Wireless Network Adapter
         10 ...00 23 8b 39 ec 71 ...... Marvell Yukon 88E8040T PCI-E Fast Ethernet Controller
          1 ........................... Software Loopback Interface 1
         11 ...02 00 54 55 4e 01 ...... isatap.{834A8A0A-5E2C-47D0-9673-7965DE8B5470}
         14 ...02 00 54 55 4e 01 ...... Teredo Tunneling Pseudo-Interface
         17 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #3
         20 ...00 00 00 00 00 00 00 e0  6TO4 Adapter
         18 ...00 00 00 00 00 00 00 e0  6TO4 Adapter
         19 ...00 00 00 00 00 00 00 e0  6TO4 Adapter
         23 ...00 00 00 00 00 00 00 e0  isatap.{C4A0E75C-765E-4F7D-A55C-77945779816A}
         34 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #5
        ===========================================================================

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0        10.1.29.1     10.1.29.117      25
                10.1.29.0    255.255.255.0          On-link     10.1.29.117     281
              10.1.29.117  255.255.255.255          On-link     10.1.29.117     281
              10.1.29.255  255.255.255.255          On-link     10.1.29.117     281
                127.0.0.0        255.0.0.0          On-link       127.0.0.1     306
                127.0.0.1  255.255.255.255          On-link       127.0.0.1     306
          127.255.255.255  255.255.255.255          On-link       127.0.0.1     306
                224.0.0.0        240.0.0.0          On-link       127.0.0.1     306
                224.0.0.0        240.0.0.0          On-link     10.1.29.117     281
          255.255.255.255  255.255.255.255          On-link       127.0.0.1     306
          255.255.255.255  255.255.255.255          On-link     10.1.29.117     281
        ===========================================================================

    Library and TCP OpenVPN:

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0        10.1.29.1     10.1.29.117      25
                  0.0.0.0          0.0.0.0      192.168.1.1   192.168.1.116      30
                  0.0.0.0        128.0.0.0      192.168.1.1   192.168.1.116      30
                10.1.29.0    255.255.255.0          On-link     10.1.29.117     281
              10.1.29.117  255.255.255.255          On-link     10.1.29.117     281
              10.1.29.255  255.255.255.255          On-link     10.1.29.117     281
            24.212.205.68  255.255.255.255        10.1.29.1     10.1.29.117      25
                127.0.0.0        255.0.0.0          On-link       127.0.0.1     306
                127.0.0.1  255.255.255.255          On-link       127.0.0.1     306
          127.255.255.255  255.255.255.255          On-link       127.0.0.1     306
                128.0.0.0        128.0.0.0      192.168.1.1   192.168.1.116      30
              192.168.1.0    255.255.255.0          On-link   192.168.1.116     286
            192.168.1.116  255.255.255.255          On-link   192.168.1.116     286
            192.168.1.255  255.255.255.255          On-link   192.168.1.116     286
                224.0.0.0        240.0.0.0          On-link       127.0.0.1     306
                224.0.0.0        240.0.0.0          On-link   192.168.1.116     286
                224.0.0.0        240.0.0.0          On-link     10.1.29.117     281
          255.255.255.255  255.255.255.255          On-link       127.0.0.1     306
          255.255.255.255  255.255.255.255          On-link   192.168.1.116     286
          255.255.255.255  255.255.255.255          On-link     10.1.29.117     281
        ===========================================================================

    Thanks for any advice. I looked at one of the answers, but I'm not sure if it applies to me: it said that 10.*.*.* was the VPN connection, but I appear to have 10.*.*.* when I connect just to the library.
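
    A hedged reading of the second table: the two half-default routes (0.0.0.0/1 and 128.0.0.0/1) via 192.168.1.1 are the classic redirect-gateway picture, and the TAP adapter did get a home-LAN address (192.168.1.116), so routing looks healthy. A quick sanity check, sketched with placeholder addresses - if home machines answer by IP but not by name, the problem is name resolution (broadcast browsing rarely crosses a VPN), not routing:

        ping 192.168.1.1                  # home router: already known to work
        ping 192.168.1.10                 # placeholder for another machine on the home LAN
        nslookup somehost 192.168.1.1     # try the home router as the DNS server

    On the Tomato side, pushing a DNS/WINS server to clients is a common fix (OpenVPN server custom-config directives, placeholder address again):

        push "dhcp-option DNS 192.168.1.1"
        push "dhcp-option WINS 192.168.1.1"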


  • e2fsck / resize2fs problems

    - by BlakBat
    I've got 6 drives (each 1.5T, all the same model and firmware revision) that are part of a RAID5 array. The RAID5 makes an LVM volume group and a logical volume. The latter contains only one ext3 partition. I recently ran:

        e2fsck -f /dev/vg03/lv01 && resize2fs -M /dev/vg03/lv01

    which exited without an error. Now when I try to mount /dev/vg03/lv01 I get:

        EXT3-fs error (device dm-0): ext3_check_descriptors: Block bitmap for group 30533 not in group (block 1000532368)!
        EXT3-fs: group descriptors corrupted!

    How do I get out of this predicament? This is all the info I can currently give you. fdisk -l /dev/sd[cdefgh] shows (correctly) that they are "Linux raid autodetect", but fdisk now shows:

        fdisk -l /dev/md0
        Disk /dev/md0: 7501.5 GB, 7501495664640 bytes
        ...
        Disk identifier: 0x00000000
        Disk /dev/md0 doesn't contain a valid partition table

    (instead of an LVM type partition), and:

        fdisk -l /dev/vg03/lv01
        Disk /dev/vg03/lv01: 7501.5 GB, 7501491732480 bytes
        ...
        Disk identifier: 0x00000000
        Disk /dev/vg03/lv01 doesn't contain a valid partition table

    (instead of an ext3 type partition). I've tried:

        e2fsck -fy /dev/vg03/lv01
        e2fsck 1.41.12 (17-May-2010)
        e2fsck: Group descriptors look bad... trying backup blocks...
        Block bitmap for group 30533 is not in group. (block 1000532368)
        Relocate? yes
        Inode bitmap for group 30533 is not in group. (block 1000532369)
        Relocate? yes
        Pass 1: Checking inodes, blocks, and sizes
        Relocating group 30533's block bitmap to 1000524246...
        Error allocating 1 contiguous block(s) in block group 30533 for inode bitmap: Could not allocate block in ext2 filesystem
        e2fsck: aborted

    Extra information I can give you:

        cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active (auto-read-only) raid5 sdg1[0] sdh1[5] sdf1[4] sde1[3] sdc1[2] sdd1[1]
              7325679360 blocks level 5, 128k chunk, algorithm 2 [6/6] [UUUUUU]
              bitmap: 1/175 pages [4KB], 4096KB chunk
        unused devices: <none>

    Lastly, all smartctl tests (short and extended) showed no errors on any of the disks. Should I try resize2fs to grow /dev/vg03/lv01 and redo the e2fsck? Should I cfdisk /dev/md0 and /dev/vg03/lv01 back to their real types? Thanks in advance for all and any help.

    2011-09-20 UPDATE: I issued the following commands and was able to remount the partition, but by comparing the size (df) before and after, it seems that 1Tb of data has gone missing. By checking the MD5SUMS (from an old backup) of some files against the "same" files from the remounted partition, some errors have been detected. The commands issued to remount the partition were:

        dumpe2fs /dev/vg03/lv01
          Block count: 1000491435
          Block size: 4096
        tune2fs -O ^has_journal /dev/vg03/lv01
        resize2fs -p /dev/vg03/lv01
        dumpe2fs /dev/vg03/lv01
          Block count: 1831418880
          Block size: 4096
        mount -o ro,noatime /dev/vg03/lv01 /mnt/raid

    OK... but files have been damaged / gone missing.
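
    For the record, a hedged sketch of the usual first move against "group descriptors corrupted": check against a backup superblock rather than the primary one. The block number below is only a typical location for 4k-block filesystems, so list the real ones first - and ideally run all of this against a dd image of the volume, not the only copy:

        # -n only prints what WOULD be done, including the backup superblock locations
        mke2fs -n /dev/vg03/lv01
        # then point e2fsck at one of the listed backups (32768 is typical for 4k blocks)
        e2fsck -b 32768 -B 4096 /dev/vg03/lv01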


  • mysqld service crashes on restart, after importing mysqldump #innodb

    - by ubunut
    I have 2 MySQL servers. Let's call them server01 & server02. Both have the same configuration (mysqladmin Ver 8.42 Distrib 5.1.61, for redhat-linux-gnu on x86_64):

        [client]
        default-character-set=utf8

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0
        max_allowed_packet = 16M
        default-character-set=utf8
        default-collation=utf8_unicode_ci
        character-set-server=utf8
        collation-server=utf8_unicode_ci
        default-storage-engine = InnoDB
        innodb_data_home_dir = /var/lib/mysql
        innodb_log_group_home_dir = /var/lib/mysql
        innodb_data_file_path = ibdata1:10M:autoextend
        innodb_additional_mem_pool_size = 2M
        innodb_log_file_size = 5M
        innodb_log_buffer_size = 8M
        innodb_lock_wait_timeout = 50
        innodb_flush_log_at_trx_commit = 1
        innodb_buffer_pool_size = 700M
        table_cache = 300
        thread_cache_size = 4
        query_cache_size = 200m
        query_cache_limit = 10m

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    I make a mysqldump on server01 (most tables in these databases are InnoDB; some of the mysql.* tables are InnoDB too):

        mysqldump -uuser -ppassword --all-databases > testservers.sql

    Then I import testservers.sql on server02 (mysqld has been started with --skip-networking):

        mysql -uuser < testservers.sql

    So far so good; I can log in to mysql & everything seems to be OK. BUT when I exit to the shell and execute service mysqld restart, the service fails to start. Stack trace in /var/log/mysqld.log:

        121022 14:53:19 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        121022 14:53:19 [Warning] '--default-character-set' is deprecated and will be removed in a future release. Please use '--character-set-server' instead.
        121022 14:53:19 [Warning] '--default-collation' is deprecated and will be removed in a future release. Please use '--collation-server' instead.
        12:53:19 UTC - mysqld got signal 11 ;
        This could be because you hit a bug. It is also possible that this binary
        or one of the libraries it was linked against is corrupt, improperly built,
        or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose
        the problem, but since we have already crashed, something is definitely wrong
        and this may fail.

        key_buffer_size=8384512
        read_buffer_size=131072
        max_used_connections=0
        max_threads=151
        thread_count=0
        connection_count=0
        It is possible that mysqld could use up to
        key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338324 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.

        Thread pointer: 0x267e630
        Attempting backtrace. You can use the following information to find out
        where mysqld died. If you see no messages after this, something went
        terribly wrong...
        stack_bottom = 7fff3efe0be0 thread_stack 0x40000
        /usr/libexec/mysqld(my_print_stacktrace+0x29) [0x84bd89]
        /usr/libexec/mysqld(handle_fatal_signal+0x483) [0x6a0be3]
        /lib64/libpthread.so.0() [0x338d60f500]
        /usr/libexec/mysqld(ha_resolve_by_name(THD*, st_mysql_lex_string const*)+0x81) [0x6956e1]
        /usr/libexec/mysqld(open_table_def(THD*, st_table_share*, unsigned int)+0xe0a) [0x60e5ba]
        /usr/libexec/mysqld(get_table_share(THD*, TABLE_LIST*, char*, unsigned int, unsigned int, int*)+0x20b) [0x602b0b]
        /usr/libexec/mysqld() [0x603597]
        /usr/libexec/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x7a1) [0x6079a1]
        /usr/libexec/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5d0) [0x608570]
        /usr/libexec/mysqld(open_and_lock_tables_derived(THD*, TABLE_LIST*, bool)+0x6a) [0x60877a]
        /usr/libexec/mysqld(plugin_init(int*, char**, int)+0x622) [0x715af2]
        /usr/libexec/mysqld() [0x5bd3b2]
        /usr/libexec/mysqld(main+0x1b3) [0x5bfc93]
        /lib64/libc.so.6(__libc_start_main+0xfd) [0x338d21ecdd]
        /usr/libexec/mysqld() [0x5087b9]
        Trying to get some variables.
        Some pointers may be invalid and cause the dump to abort.
        Query (0): is an invalid pointer
        Connection ID (thread ID): 0
        Status: NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
        information that should help you find out what is causing the crash.
        121022 14:53:19 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

    A typical mysqldump entry looks like this:

        DROP TABLE IF EXISTS `adodb_logsql`;
        /*!40101 SET @saved_cs_client = @@character_set_client */;
        /*!40101 SET character_set_client = utf8 */;
        CREATE TABLE `adodb_logsql` (
          `id` bigint(10) unsigned NOT NULL AUTO_INCREMENT,
          `created` datetime NOT NULL,
          `sql0` varchar(250) NOT NULL DEFAULT '',
          `sql1` text,
          `params` text,
          `tracer` text,
          `timer` decimal(16,6) NOT NULL DEFAULT '0.000000',
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='to save some logs from ADOdb';
        /*!40101 SET character_set_client = @saved_cs_client */;

    IF I change all occurrences of "ENGINE=InnoDB" to "ENGINE=MyISAM" before the import, then the service has no problem restarting. I'm quite puzzled as to what's happening; maybe I'm just an idiot, then by all means tell me so. Any help would be greatly appreciated!
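
    A hedged reading of that trace: the crash happens inside plugin_init while opening a table, which fits the mysql.* system tables having been overwritten by the dump (the question notes some of them are InnoDB) before the InnoDB engine itself is initialized. A sketch of a workaround - dump only the application schemas and leave server02's system tables alone (the database names are placeholders):

        mysqldump -uuser -ppassword --databases app_db1 app_db2 > testservers.sql
        # if server02's mysql schema is already damaged, rebuild it before re-importing
        mysql_install_db --user=mysql
        service mysqld restart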


  • iptables rule(s) to send openvpn traffic from clients over an sshuttle tunnel?

    - by Sam Martin
    I have an Ubuntu 12.04 box with OpenVPN. The VPN is working as expected - clients can connect, browse the Web, etc. The OpenVPN server IP is 10.8.0.1 on tun0. On that same box, I can use sshuttle to tunnel into another network to access a Web server on 10.10.0.9. sshuttle does its magic using the following iptables commands:

        iptables -t nat -N sshuttle-12300
        iptables -t nat -F sshuttle-12300
        iptables -t nat -I OUTPUT 1 -j sshuttle-12300
        iptables -t nat -I PREROUTING 1 -j sshuttle-12300
        iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.10.0.0/24 -p tcp --to-ports 12300 -m ttl ! --ttl 42
        iptables -t nat -A sshuttle-12300 -j RETURN --dest 127.0.0.0/8 -p tcp

    Is it possible to forward traffic from OpenVPN clients over the sshuttle tunnel to the remote Web server? I'd ultimately like to be able to set up any complicated tunneling on the server, and have relatively "dumb" clients (iPad, etc.) be able to access the remote servers via OpenVPN. Below is a basic diagram of the scenario. [Diagram omitted.]

    [Edit: added output from the OpenVPN box]

        $ sudo iptables -nL -v -t nat
        Chain PREROUTING (policy ACCEPT 1498 packets, 252K bytes)
         pkts bytes target          prot opt in  out  source     destination
         1512  253K sshuttle-12300  all  --  *   *    0.0.0.0/0  0.0.0.0/0

        Chain INPUT (policy ACCEPT 322 packets, 58984 bytes)
         pkts bytes target          prot opt in  out  source     destination

        Chain OUTPUT (policy ACCEPT 584 packets, 43241 bytes)
         pkts bytes target          prot opt in  out  source     destination
          587 43421 sshuttle-12300  all  --  *   *    0.0.0.0/0  0.0.0.0/0

        Chain POSTROUTING (policy ACCEPT 589 packets, 43595 bytes)
         pkts bytes target          prot opt in  out  source       destination
         1175 76298 MASQUERADE      all  --  *   eth0 10.8.0.0/24  0.0.0.0/0

        Chain sshuttle-12300 (2 references)
         pkts bytes target          prot opt in  out  source     destination
           17  1076 REDIRECT        tcp  --  *   *    0.0.0.0/0  10.10.0.0/24  TTL match TTL != 42 redir ports 12300
            0     0 RETURN          tcp  --  *   *    0.0.0.0/0  127.0.0.0/8

        $ sudo iptables -nL -v -t filter
        Chain INPUT (policy ACCEPT 97493 packets, 30M bytes)
         pkts bytes target  prot opt in  out  source       destination

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out  source       destination
         131K  109M ACCEPT  all  --  *   *    0.0.0.0/0    0.0.0.0/0    state RELATED,ESTABLISHED
         1370 89160 ACCEPT  all  --  *   *    10.8.0.0/24  0.0.0.0/0
            0     0 REJECT  all  --  *   *    0.0.0.0/0    0.0.0.0/0    reject-with icmp-port-unreachable

    [Edit 2: more OpenVPN server output]

        $ netstat -r
        Kernel IP routing table
        Destination   Gateway      Genmask          Flags  MSS Window  irtt Iface
        default       192.168.1.1  0.0.0.0          UG       0 0          0 eth0
        10.8.0.0      10.8.0.2     255.255.255.0    UG       0 0          0 tun0
        10.8.0.2      *            255.255.255.255  UH       0 0          0 tun0
        192.168.1.0   *            255.255.255.0    U        0 0          0 eth0

    [Edit 3: still more debug output] IP forwarding appears to be enabled correctly on the OpenVPN server:

        # find /proc/sys/net/ipv4/conf/ -name forwarding -ls -execdir cat {} \;
        18926 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/all/forwarding
        1
        18954 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/default/forwarding
        1
        18978 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/eth0/forwarding
        1
        19003 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/lo/forwarding
        1
        19028 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/tun0/forwarding
        1

    Client routing table:

        $ netstat -r
        Routing tables

        Internet:
        Destination        Gateway            Flags    Refs      Use  Netif Expire
        0/1                10.8.0.5           UGSc        8       48   tun0
        default            192.168.1.1        UGSc        2     1652    en1
        10.8.0.1/32        10.8.0.5           UGSc        1        0   tun0
        10.8.0.5           10.8.0.6           UHr        13        0   tun0
        10.10.0/24         10.8.0.5           UGSc        0        0   tun0
        <snip>

    Traceroute from client:

        $ traceroute 10.10.0.9
        traceroute to 10.10.0.9 (10.10.0.9), 64 hops max, 52 byte packets
         1  10.8.0.1 (10.8.0.1)  5.403 ms  1.173 ms  1.086 ms
         2  192.168.1.1 (192.168.1.1)  4.693 ms  2.110 ms  1.990 ms
         3  l100.my-verizon-garbage (client-ext-ip)  7.453 ms  7.089 ms  6.248 ms
         4  * * *
         5  10.10.0.9 (10.10.0.9)  14.915 ms !N  *  6.620 ms !N
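
    One hedged observation: sshuttle's REDIRECT rule sends matching packets to a proxy port (12300) that sshuttle, by default, binds on 127.0.0.1 - fine for connections originating on the box itself, but packets forwarded in from tun0 get redirected to a listener they cannot reach, which would match the !N (network unreachable) responses in the traceroute. A sketch of the usual fix, with placeholder SSH details:

        # bind the transparent-proxy listener on all addresses so redirected packets
        # from VPN clients can actually reach it
        sshuttle -l 0.0.0.0:12300 -r user@remote-gateway 10.10.0.0/24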


  • can't connect 2 subnets through RRAS 2008 r2

    - by mcdwight6
    I'm working on a project for a networking class. In VMware Workstation, I have to set up a 2008 R2 server with DHCP reservations for 2 clients on separate subnets and have them ping each other. Here is the output of the route print command:

        ===========================================================================
        Interface List
         13 ...00 50 56 2a e7 11 ...... Intel(R) PRO/1000 MT Network Connection #3
         10 ...00 0c 29 66 88 dd ...... Intel(R) PRO/1000 MT Network Connection
          1 ........................... Software Loopback Interface 1
         24 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter
         11 ...02 00 54 55 4e 01 ...... Teredo Tunneling Pseudo-Interface
         14 ...00 00 00 00 00 00 00 e0  6TO4 Adapter
         16 ...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #2
         17 ...00 00 00 00 00 00 00 e0  isatap.{5B8FB196-616F-4168-A020-03E63A309CEC}
        ===========================================================================

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0          On-link        10.0.0.2     266
                  0.0.0.0          0.0.0.0          On-link       223.6.6.2     266
                 10.0.0.0        255.0.0.0          On-link        10.0.0.2     266
                 10.0.0.2  255.255.255.255          On-link        10.0.0.2     266
           10.255.255.255  255.255.255.255          On-link        10.0.0.2     266
                127.0.0.0        255.0.0.0          On-link       127.0.0.1     306
                127.0.0.1  255.255.255.255          On-link       127.0.0.1     306
          127.255.255.255  255.255.255.255          On-link       127.0.0.1     306
                128.6.0.0      255.255.0.0          On-link        10.0.0.2      11
            128.6.255.255  255.255.255.255          On-link        10.0.0.2     266
                223.6.6.0    255.255.255.0          On-link        10.0.0.2      11
                223.6.6.0    255.255.255.0          On-link       223.6.6.2     266
                223.6.6.2  255.255.255.255          On-link       223.6.6.2     266
              223.6.6.255  255.255.255.255          On-link        10.0.0.2     266
              223.6.6.255  255.255.255.255          On-link       223.6.6.2     266
                224.0.0.0        240.0.0.0          On-link       127.0.0.1     306
                224.0.0.0        240.0.0.0          On-link        10.0.0.2     266
                224.0.0.0        240.0.0.0          On-link       223.6.6.2     266
          255.255.255.255  255.255.255.255          On-link       127.0.0.1     306
          255.255.255.255  255.255.255.255          On-link        10.0.0.2     266
          255.255.255.255  255.255.255.255          On-link       223.6.6.2     266
        ===========================================================================
        Persistent Routes:
          Network Address          Netmask  Gateway Address  Metric
                  0.0.0.0          0.0.0.0         10.0.0.2  Default
                  0.0.0.0          0.0.0.0        128.6.0.2  Default
                  0.0.0.0          0.0.0.0        223.6.6.2  Default
                128.6.0.0      255.255.0.0         10.0.0.2       1
                223.6.6.0    255.255.255.0         10.0.0.2       1
        ===========================================================================

        IPv6 Route Table
        ===========================================================================
        Active Routes:
         If Metric Network Destination      Gateway
          1    306 ::1/128                  On-link
         14   1010 2002::/16                On-link
         14    266 2002:8006:2::8006:2/128  On-link
          1    306 ff00::/8                 On-link
        ===========================================================================
        Persistent Routes:
          None

    My problem is that although I have set up both dynamic and persistent static routes on my R2 server, neither of the clients can ping even the NIC outside its own subnet. For example, Client A can ping the NIC at 10.0.0.2 and vice versa, but it gets a general transmit failure when it tries to ping the card at 223.6.6.2, let alone trying to ping the other client. I have completely disabled the firewalls on all machines and tried anything else I could think of, without success. What am I missing?

    Edit: Since posting this, I also noticed that the default gateways on my 2 NICs keep getting zeroed out. Does anyone know a fix for this?
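
    A hedged sketch of the cleanup this table suggests (a LAN router needs no default gateway pointing at its own NICs, and each client needs the server's near-side NIC as its gateway - addresses below are placeholders consistent with the table):

        rem on the server: drop the conflicting persistent defaults; 10.0.0.0/8 and
        rem 223.6.6.0/24 are directly connected, so RRAS can route between them itself
        route delete 0.0.0.0
        rem on the client in 223.6.6.0/24, point the gateway at the server's near NIC
        netsh interface ip set address "Local Area Connection" static 223.6.6.20 255.255.255.0 223.6.6.2 1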


  • Is this valid JFS partition?

    - by Coolmax
    This is my first question on StackExchange. My teacher gave me his laptop (with Fedora 16 on it) and a compact flash card with data. He wants to have access to the files on the card, but he couldn't get to them. The problem is that Linux doesn't know what type of partition it is. I suppose there is JFS on it:

        root@debian:~# dmesg |grep sdc
        [ 9066.908223] sd 3:0:0:1: [sdc] 3940272 512-byte logical blocks: (2.01 GB/1.87 GiB)
        [ 9066.962307] sd 3:0:0:1: [sdc] Write Protect is off
        [ 9066.962310] sd 3:0:0:1: [sdc] Mode Sense: 03 00 00 00
        [ 9066.962312] sd 3:0:0:1: [sdc] Assuming drive cache: write through
        [ 9067.028420] sd 3:0:0:1: [sdc] Assuming drive cache: write through
        [ 9067.028637] sdc: unknown partition table
        [ 9067.097065] sd 3:0:0:1: [sdc] Assuming drive cache: write through
        [ 9067.097281] sd 3:0:0:1: [sdc] Attached SCSI removable disk

    and some of the data:

        root@debian:~# hexdump -Cn 65536 /dev/sdc
        00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
        *
        00008000  4a 46 53 31 01 00 00 00  48 63 0e 00 00 00 00 00  |JFS1....Hc......|
        00008010  00 10 00 00 0c 00 03 00  00 02 00 00 09 00 00 00  |................|
        00008020  00 20 00 00 00 09 20 10  02 00 00 00 00 00 00 00  |. .... .........|
        00008030  04 00 00 00 26 00 00 00  02 00 00 00 24 00 00 00  |....&.......$...|
        00008040  41 03 00 00 16 00 00 00  00 02 00 00 a0 cc 01 00  |A...............|
        00008050  37 00 00 00 69 cc 01 00  b6 d8 ac 4b 00 00 00 00  |7...i......K....|
        00008060  32 00 00 00 02 00 00 00  00 00 00 00 00 00 00 00  |2...............|
        00008070  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
        00008080  00 00 00 00 00 00 00 00  90 15 e5 5f e3 c4 45 fa  |..........._..E.|
        00008090  9d 6a 5c b5 4f da 62 1a  00 00 00 00 00 00 00 00  |.j\.O.b.........|
        000080a0  00 00 00 00 00 00 00 00  c3 c9 01 00 ed 81 00 00  |................|
        000080b0  01 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
        000080c0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
        *
        [cut]
        *
        0000f000  4a 46 53 31 01 00 00 00  48 63 0e 00 00 00 00 00  |JFS1....Hc......|
        0000f010  00 10 00 00 0c 00 03 00  00 02 00 00 09 00 00 00  |................|
        0000f020  00 20 00 00 00 09 20 10  00 00 00 00 00 00 00 00  |. .... .........|
        0000f030  04 00 00 00 26 00 00 00  02 00 00 00 24 00 00 00  |....&.......$...|
        0000f040  41 03 00 00 16 00 00 00  00 02 00 00 a0 cc 01 00  |A...............|
        0000f050  37 00 00 00 69 cc 01 00  b6 d8 ac 4b 00 00 00 00  |7...i......K....|
        0000f060  32 00 00 00 02 00 00 00  00 00 00 00 00 00 00 00  |2...............|
        0000f070  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
        0000f080  00 00 00 00 00 00 00 00  90 15 e5 5f e3 c4 45 fa  |..........._..E.|
        0000f090  9d 6a 5c b5 4f da 62 1a  00 00 00 00 00 00 00 00  |.j\.O.b.........|
        0000f0a0  00 00 00 00 00 00 00 00  c3 c9 01 00 ed 81 00 00  |................|
        0000f0b0  01 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
        0000f0c0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
        *
        00010000

    I'm a total newbie to filesystems. I googled and found that a JFS superblock may start at offset 0x8000. But what next? How do I mount this card? If there were a normal partition table, I would expect 55 AA at bytes 510 and 511, but the first 0x8000 bytes are clear. Any help would be greatly appreciated. And sorry for my bad English :) Kind regards.
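
    A hedged sketch of the next step: the JFS1 magic sitting exactly at 0x8000 of the raw device matches where JFS keeps its primary superblock (32 KiB into the filesystem), which suggests the card was formatted without a partition table and the filesystem starts at sector 0 - so mounting the whole device read-only is worth a try:

        modprobe jfs            # needs the jfs kernel module (and jfsutils for fsck.jfs)
        mkdir -p /mnt/cf
        mount -t jfs -o ro /dev/sdc /mnt/cf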


  • Set source address to use tun device does not work (Debian Squeeze)

    - by A. Donda
    There have been similar questions on StackExchange, but none of the answers helped me, so I'll try a question of my own. I have a VPN connection via OpenVPN. By default, all traffic is redirected through the tunnel using OpenVPN's "two more specific routes" trick, but I disabled that. My routing table is like this:

        Destination      Gateway      Genmask          Flags Metric Ref Use Iface
        198.144.156.141  192.168.2.1  255.255.255.255  UGH   0      0   0   eth0
        10.30.92.5       0.0.0.0      255.255.255.255  UH    0      0   0   tun1
        10.30.92.1       10.30.92.5   255.255.255.255  UGH   0      0   0   tun1
        192.168.2.0      0.0.0.0      255.255.255.0    U     0      0   0   eth0
        0.0.0.0          10.30.92.5   0.0.0.0          UG    0      0   0   tun1
        0.0.0.0          192.168.2.1  0.0.0.0          UG    0      0   0   eth0

    And the interface configuration is like this (plus the lo device):

        # ifconfig
        eth0  Link encap:Ethernet  HWaddr XX-XX-
              inet addr:192.168.2.100  Bcast:192.168.2.255  Mask:255.255.255.0
              inet6 addr: fe80::211:9ff:fe8d:acbd/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:394869 errors:0 dropped:0 overruns:0 frame:0
              TX packets:293489 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:388519578 (370.5 MiB)  TX bytes:148817487 (141.9 MiB)
              Interrupt:20 Base address:0x6f00

        tun1  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
              inet addr:10.30.92.6  P-t-P:10.30.92.5  Mask:255.255.255.255
              UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
              RX packets:64 errors:0 dropped:0 overruns:0 frame:0
              TX packets:67 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:100
              RX bytes:9885 (9.6 KiB)  TX bytes:4380 (4.2 KiB)

    The routing table has two default routes: one via eth0 through my local network router (DSL modem) at 192.168.2.1, and another via tun1 through the VPN's gateway. With this configuration, if I connect to a site, the direct route is chosen (because it has fewer hops?):

        # traceroute 8.8.8.8 -n
        traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
         1  192.168.2.1    0.427 ms   0.491 ms   0.610 ms
         2  213.191.89.13  17.981 ms  20.137 ms  22.141 ms
         3  62.109.108.48  23.681 ms  25.009 ms  26.401 ms
        ...

    This is fine, because my goal is to send only traffic from specific applications through the tunnel (esp. transmission, using its -i / bind-address-ipv4 option). To test whether this can work at all, I check it first with traceroute's -s option:

        # traceroute 8.8.8.8 -n -s 10.30.92.6
        traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
         1  * * *
         2  * * *
         3  * * *
        ...

    This I take to mean that connecting with the tunnel's local address as the source is not possible. What is possible (though only as root) is to specify the source interface:

        # traceroute 8.8.8.8 -n -i tun1
        traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
         1  10.30.92.1      129.337 ms  297.758 ms  297.725 ms
         2  * * *
         3  198.144.152.17  297.653 ms  297.652 ms  297.650 ms
        ...

    So apparently the tun1 interface is working and it is possible to send packets through it. But selecting the source interface is not implemented in my actual target application (transmission), so I would like to get source address selection to work. What am I doing wrong?
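
    A hedged sketch of the usual answer to source-address-based routing: add a policy rule keyed on the source address, so anything bound to 10.30.92.6 (e.g. transmission's bind-address-ipv4) gets the tunnel's default route. The table number is an arbitrary placeholder:

        ip rule add from 10.30.92.6 table 100
        ip route add default via 10.30.92.5 dev tun1 table 100
        ip route flush cache

    Without such a rule, the main table's best route for 8.8.8.8 is via eth0, where a 10.30.92.6 source address does not belong - which would explain the all-asterisk trace without anything actually being broken.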


  • MySQL reserves too much RAM

    - by Buddy
    I have a cheap VPS with 128MB RAM and 256MB burst. MySQL starts and reserves about 110MB, but uses no more than 20MB of them. My VPS control panel shows that I use 127MB (I'm also running nginx and sphinx). I know that it shows reserved RAM, but when I go over 128MB, my VPS reboots automatically every 4 hours. So I want to force MySQL to reserve less RAM. How can I do that? I did some tweaks with my.cnf, but they didn't help much.

    top output:

        PID   USER   PR  NI  VIRT  RES   SHR  S %CPU %MEM   TIME+ COMMAND
        1     root   15   0  2156  668   572  S  0.0  0.3 0:00.03 init
        11311 root   15   0 11212  356   228  S  0.0  0.1 0:00.00 vzctl
        11312 root   18   0  3712 1484  1248  S  0.0  0.6 0:00.01 bash
        11347 root   18   0  2284  916   732  R  0.0  0.3 0:00.00 top
        13978 root   17  -4  2248  552   344  S  0.0  0.2 0:00.00 udevd
        14262 root   15   0  1812  564   472  S  0.0  0.2 0:00.03 syslogd
        14293 sphinx 15   0 11816 1172   672  S  0.0  0.4 0:00.07 searchd
        14305 root   25   0  7192 1036   636  S  0.0  0.4 0:00.00 sshd
        14321 root   25   0  2832  836   668  S  0.0  0.3 0:00.00 xinetd
        15389 root   18   0  3708 1300  1132  S  0.0  0.5 0:00.00 mysqld_safe
        15441 mysql  15   0  113m  16m  4440  S  0.0  6.4 0:00.15 mysqld
        15489 root   21   0 13056 1456   340  S  0.0  0.6 0:00.00 nginx
        15490 nginx  18   0 13328 2388   992  S  0.0  0.9 0:00.06 nginx
        15507 nginx  25   0 19520 5888  4244  S  0.0  2.2 0:00.00 php-cgi
        15508 nginx  18   0 19636 4876  2748  S  0.0  1.9 0:00.12 php-cgi
        15509 nginx  15   0 19668 4872  2716  S  0.0  1.9 0:00.11 php-cgi
        15518 root   18   0  4492 1116   568  S  0.0  0.4 0:00.01 crond

    MySQLTuner output:

        >> MySQLTuner 1.0.1 - Major Hayden <[email protected]>
        >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >> Run with '--help' for additional options and output filtering
        Please enter your MySQL administrative login: root
        Please enter your MySQL administrative password:

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.0.77
        [OK] Operating on 32-bit architecture with less than 2GB RAM

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 1M (Tables: 1)
        [OK] Total fragmented tables: 0

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 38m 43s (37 q [0.016 qps], 20 conn, TX: 4M, RX: 3K)
        [--] Reads / Writes: 100% / 0%
        [--] Total buffers: 28.1M global + 832.0K per thread (100 max threads)
        [OK] Maximum possible memory usage: 109.4M (42% of installed RAM)
        [OK] Slow queries: 0% (0/37)
        [OK] Highest usage of available connections: 1% (1/100)
        [OK] Key buffer size / total MyISAM indexes: 128.0K/64.0K
        [OK] Query cache efficiency: 42.1% (8 cached / 19 selects)
        [OK] Query cache prunes per day: 0
        [!!] Temporary tables created on disk: 27% (3 on disk / 11 total)
        [!!] Thread cache is disabled
        [OK] Table cache hit rate: 57% (8 open / 14 opened)
        [OK] Open file limit used: 1% (12/1K)
        [OK] Table locks acquired immediately: 100% (22 immediate / 22 locks)
        [!!] Connections aborted: 10%
        [OK] InnoDB data size / buffer pool: 1.5M/8.0M

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            When making adjustments, make tmp_table_size/max_heap_table_size equal
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Set thread_cache_size to 4 as a starting value
            Your applications are not closing MySQL connections properly
        Variables to adjust:
            tmp_table_size (> 32M)
            max_heap_table_size (> 16M)
            thread_cache_size (start at 4)

    I think if I do what MySQLTuner says, MySQL will use more RAM.
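
    A hedged sketch of a low-memory my.cnf for a 128MB VPS - every value below is a starting point to experiment with, not a tuned figure. The big lever is max_connections: the tuner's "maximum possible memory usage" is global buffers plus 832K for each of the 100 allowed threads, so capping connections shrinks the reservation more than any single buffer does:

        [mysqld]
        innodb_buffer_pool_size = 8M
        key_buffer_size         = 4M
        query_cache_size        = 8M
        query_cache_limit       = 1M
        tmp_table_size          = 16M
        max_heap_table_size     = 16M
        max_connections         = 30
        thread_cache_size       = 4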


  • How broken is routing strategy that causes a martian packet (so far only) during tracepath?

    - by lkraav
    I believe I've achieved a table that routes packets from and to eth1/192.168.3.x through 192.168.3.1, and packets from and to eth0/192.168.1.x through 192.168.1.1 (helpful source).

    Question: when doing tracepath from 192.168.3.20 (from within the vserver), I'm getting

        kernel: [318535.927489] martian source 192.168.3.20 from 212.47.223.33, on dev eth0

    at or near the target IP, while intermediary hops go without (log below). I don't understand why this packet is arriving on eth0 instead of eth1, even after reading this:

        Note that you may see packets from non-routable IP addresses when running the
        traceroute or tracepath commands. While packets cannot be routed to these routers,
        packets sent between 2 routers only need to know the address of the next hop within
        the local networks, which could be a non-routable address.

    Can someone explain that paragraph in human language? Based on short initial trials so far, everything else seems to work without causing martians. Is this contained to the nature of tracepath operation, or do I have some other, bigger routing problem that will break real traffic?

    Side note: is it possible to inspect a martian packet with tcpdump or wireshark or anything of the sort? I have not been able to get it to show up on my own.

        vserver-20 / # tracepath -n 212.47.223.33
         1:  192.168.3.2       0.064ms pmtu 1500
         1:  192.168.3.1       1.076ms
         1:  192.168.3.1       1.259ms
         2:  90.191.8.2        1.908ms
         3:  90.190.134.194    2.595ms
         4:  194.126.123.94    2.136ms asymm  5
         5:  195.250.170.22    2.266ms asymm  6
         6:  212.47.201.86     2.390ms asymm  7
         7:  no reply
         8:  no reply
         9:  no reply
        ^C

    Host routing:

        $ sudo ip addr
        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
        2: sit0: <NOARP> mtu 1480 qdisc noop state DOWN
            link/sit 0.0.0.0 brd 0.0.0.0
        3: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 00:24:1d:de:b3:5d brd ff:ff:ff:ff:ff:ff
            inet 192.168.1.2/24 scope global eth0
        4: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
            link/ether 00:0c:46:46:a3:6a brd ff:ff:ff:ff:ff:ff
            inet 192.168.3.2/27 scope global eth1
            inet 192.168.3.20/27 brd 192.168.3.31 scope global secondary eth1   # linux-vserver instance

        $ sudo ip route
        default via 192.168.1.1 dev eth0 metric 3
        unreachable 127.0.0.0/8 scope host
        192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.2
        192.168.3.0/27 dev eth1 proto kernel scope link src 192.168.3.2

        $ sudo ip rule
        0:      from all lookup local
        32764:  from all to 192.168.3.0/27 lookup dmz
        32765:  from 192.168.3.0/27 lookup dmz
        32766:  from all lookup main
        32767:  from all lookup default

        $ sudo ip route show table dmz
        default via 192.168.3.1 dev eth1 metric 4
        192.168.3.0/27 dev eth1 scope link metric 4

    Gateway routing:

        # ip route
        10.24.0.2 dev tun0 proto kernel scope link src 10.24.0.1
        10.24.0.0/24 via 10.24.0.2 dev tun0
        192.168.3.0/24 dev br-dmz proto kernel scope link src 192.168.3.1
        192.168.1.0/24 dev br-lan proto kernel scope link src 192.168.1.1
        $ISP_NET/23 dev eth0.1 proto kernel scope link src $WAN_IP
        default via $ISP_GW dev eth0.1

    Additional background: Options for non-virtualized network interface isolation?
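
    On the side note, a hedged sketch for making martians visible: the kernel can log them with the receiving interface (which is what produced the message above), and capturing on both NICs shows where the reply really lands - assuming the tracepath target used above:

        # log martian packets to dmesg/syslog, including the receiving device
        sysctl -w net.ipv4.conf.all.log_martians=1
        # watch for the offending reply on each NIC
        tcpdump -ni eth0 host 212.47.223.33
        tcpdump -ni eth1 host 212.47.223.33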


  • From Binary to Data Structures

    - by Cédric Menzi
    Table of Contents

    - Introduction
    - PE file format and COFF header
    - COFF file header
    - BaseCoffReader
    - Byte4ByteCoffReader
    - UnsafeCoffReader
    - ManagedCoffReader
    - Conclusion
    - History

    This article is also available on CodeProject.

    Introduction

    Sometimes you want to parse well-formed binary data and bring it into your objects to do some dirty stuff with it. In the Windows world, most data structures are stored in a special binary format. Either we call a WinApi function, or we want to read from special files like images, spool files, executables or maybe the previously announced Outlook Personal Folders File. Most specifications for these files can be found in the MSDN Library: Open Specifications.

    In my example, we are going to get the COFF (Common Object File Format) file header from a PE (Portable Executable). The exact specification can be found here: PECOFF.

    PE file format and COFF header

    Before we start, we need to know how this file is formatted. [Figure: overview of the Microsoft PE executable format. Source: Microsoft.]

    Our goal is to get the PE header. As we can see, the image starts with an MS-DOS 2.0 header, which is not important for us. From the documentation we can read: "...After the MS-DOS stub, at the file offset specified at offset 0x3c, is a 4-byte...". With this information, we know our reader has to jump to location 0x3c and read the offset to the signature. The signature is always 4 bytes and ensures that the image is a PE file. The signature is: PE\0\0.

    To prove this, we first seek to offset 0x3c and check whether the file contains the signature. We need to declare some constants, because we do not want magic numbers:

        private const int PeSignatureOffsetLocation = 0x3c;
        private const int PeSignatureSize = 4;
        private const string PeSignatureContent = "PE";

    Then we need a method for moving the reader to the correct location to read the offset of the signature. With this method, we always move the underlying Stream of the BinaryReader to the start location of the PE signature:

        private void SeekToPeSignature(BinaryReader br)
        {
            // seek to the offset for the PE signature
            br.BaseStream.Seek(PeSignatureOffsetLocation, SeekOrigin.Begin);

            // read the offset
            int offsetToPeSig = br.ReadInt32();

            // seek to the start of the PE signature
            br.BaseStream.Seek(offsetToPeSig, SeekOrigin.Begin);
        }

    Now we can check if it is a valid PE image by reading whether the next 4 bytes contain the content PE:

        private bool IsValidPeSignature(BinaryReader br)
        {
            // read 4 bytes to get the PE signature
            byte[] peSigBytes = br.ReadBytes(PeSignatureSize);

            // convert it to a string and trim \0 at the end of the content
            string peContent = Encoding.Default.GetString(peSigBytes).TrimEnd('\0');

            // check if PE is in the content
            return peContent.Equals(PeSignatureContent);
        }

    With this basic functionality, we have a good base reader class to try the different methods of parsing the COFF file header.

    COFF file header

    The COFF header has the following structure:

        Offset  Size  Field
        0       2     Machine
        2       2     NumberOfSections
        4       4     TimeDateStamp
        8       4     PointerToSymbolTable
        12      4     NumberOfSymbols
        16      2     SizeOfOptionalHeader
        18      2     Characteristics

    If we translate this table to code, we get something like this:

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        public struct CoffHeader
        {
            public MachineType Machine;
            public ushort NumberOfSections;
            public uint TimeDateStamp;
            public uint PointerToSymbolTable;
            public uint NumberOfSymbols;
            public ushort SizeOfOptionalHeader;
            public Characteristic Characteristics;
        }

    BaseCoffReader

    All readers do the same thing, so we go to the patterns library in our head and see that the Strategy pattern and the Template Method pattern stick out on the bookshelf. I have decided to take the Template Method pattern in this case, because Parse() should handle the IO for all implementations, and the concrete parsing should be done in the derived classes:

        public CoffHeader Parse()
        {
            using (var br = new BinaryReader(File.Open(_fileName, FileMode.Open, FileAccess.Read, FileShare.Read)))
            {
                SeekToPeSignature(br);
                if (!IsValidPeSignature(br))
                {
                    throw new BadImageFormatException();
                }
                return ParseInternal(br);
            }
        }

        protected abstract CoffHeader ParseInternal(BinaryReader br);

    First we open the BinaryReader and seek to the PE signature, then we check if it contains a valid PE signature, and the rest is done by the derived implementations.

    Byte4ByteCoffReader

    The first solution uses the BinaryReader. It is the most common way to get the data. We only need to know the order, the data type and its size. If we read byte for byte, we could comment out the first line in the CoffHeader structure, because we have control over the order of the member assignments:

        protected override CoffHeader ParseInternal(BinaryReader br)
        {
            CoffHeader coff = new CoffHeader();
            coff.Machine = (MachineType)br.ReadInt16();
            coff.NumberOfSections = (ushort)br.ReadInt16();
            coff.TimeDateStamp = br.ReadUInt32();
            coff.PointerToSymbolTable = br.ReadUInt32();
            coff.NumberOfSymbols = br.ReadUInt32();
            coff.SizeOfOptionalHeader = (ushort)br.ReadInt16();
            coff.Characteristics = (Characteristic)br.ReadInt16();
            return coff;
        }

    If the structure is as short as the COFF header here, and the specification will never change, there is probably no reason to change the strategy. But if a data type is changed, a new member is added, or the ordering of members is changed, the maintenance costs of this method are very high.

    UnsafeCoffReader

    Another way to bring the data into this structure is using a "magical" unsafe trick. As above, we know the layout and order of the data structure. Now we need the StructLayout attribute, because we have to ensure that the .NET runtime allocates the structure in the same order as it is specified in the source code. We also need to enable "Allow unsafe code (/unsafe)" in the project's build properties. Then we need to add the following constructor to the CoffHeader structure:

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        public struct CoffHeader
        {
            public CoffHeader(byte[] data)
            {
                unsafe
                {
                    fixed (byte* packet = &data[0])
                    {
                        this = *(CoffHeader*)packet;
                    }
                }
            }
        }

    The "magic" trick is in the statement this = *(CoffHeader*)packet;. What happens here? We have a fixed chunk of data somewhere in memory, and because a struct in C# is a value type, the assignment operator = copies the whole data of the structure, not only a reference.

    To fill the structure with data, we need to pass the data as bytes into the CoffHeader structure. This can be achieved by reading the exact size of the structure from the PE file:

        protected override CoffHeader ParseInternal(BinaryReader br)
        {
            return new CoffHeader(br.ReadBytes(Marshal.SizeOf(typeof(CoffHeader))));
        }

    This solution is the fastest way to parse the data and bring it into the structure, but it is unsafe and could introduce some security and stability risks.

    ManagedCoffReader

    In this solution we use the same structure-assignment approach as above, but we replace the unsafe part in the constructor with the following managed part:

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
        public struct CoffHeader
        {
            public CoffHeader(byte[] data)
            {
                IntPtr coffPtr = IntPtr.Zero;
                try
                {
                    int size = Marshal.SizeOf(typeof(CoffHeader));
                    coffPtr = Marshal.AllocHGlobal(size);
                    Marshal.Copy(data, 0, coffPtr, size);
                    this = (CoffHeader)Marshal.PtrToStructure(coffPtr, typeof(CoffHeader));
                }
                finally
                {
                    Marshal.FreeHGlobal(coffPtr);
                }
            }
        }

    Conclusion

    We saw that we can parse well-formed binary data into our data structures using different approaches. The first is probably the clearest way, because we know each member, its size and its ordering, and we have full control over reading the data for each member. But if a member is added or the structure changes for some reason, we need to change the reader. The two other solutions use the structure-assignment approach. In the unsafe implementation we need to compile the project with the /unsafe option: we gain performance, but take on some security and stability risks.


  • Basics of Join Predicate Pushdown in Oracle

    - by Maria Colgan
    Happy New Year to all of our readers! We hope you all had a great holiday season. We start the new year by continuing our series on Optimizer transformations. This time it is the turn of Predicate Pushdown. I would like to thank Rafi Ahmed for the content of this blog.Normally, a view cannot be joined with an index-based nested loop (i.e., index access) join, since a view, in contrast with a base table, does not have an index defined on it. A view can only be joined with other tables using three methods: hash, nested loop, and sort-merge joins. Introduction The join predicate pushdown (JPPD) transformation allows a view to be joined with index-based nested-loop join method, which may provide a more optimal alternative. In the join predicate pushdown transformation, the view remains a separate query block, but it contains the join predicate, which is pushed down from its containing query block into the view. The view thus becomes correlated and must be evaluated for each row of the outer query block. These pushed-down join predicates, once inside the view, open up new index access paths on the base tables inside the view; this allows the view to be joined with index-based nested-loop join method, thereby enabling the optimizer to select an efficient execution plan. The join predicate pushdown transformation is not always optimal. The join predicate pushed-down view becomes correlated and it must be evaluated for each outer row; if there is a large number of outer rows, the cost of evaluating the view multiple times may make the nested-loop join suboptimal, and therefore joining the view with hash or sort-merge join method may be more efficient. The decision whether to push down join predicates into a view is determined by evaluating the costs of the outer query with and without the join predicate pushdown transformation under Oracle's cost-based query transformation framework. The join predicate pushdown transformation applies to both non-mergeable views and mergeable views and to pre-defined and inline views as well as to views generated internally by the optimizer during various transformations. The following shows the types of views on which join predicate pushdown is currently supported. UNION ALL/UNION view Outer-joined view Anti-joined view Semi-joined view DISTINCT view GROUP-BY view Examples Consider query A, which has an outer-joined view V. The view cannot be merged, as it contains two tables, and the join between these two tables must be performed before the join between the view and the outer table T4. A: SELECT T4.unique1, V.unique3 FROM T_4K T4,            (SELECT T10.unique3, T10.hundred, T10.ten             FROM T_5K T5, T_10K T10             WHERE T5.unique3 = T10.unique3) VWHERE T4.unique3 = V.hundred(+) AND       T4.ten = V.ten(+) AND       T4.thousand = 5; The following shows the non-default plan for query A generated by disabling join predicate pushdown. When query A undergoes join predicate pushdown, it yields query B. Note that query B is expressed in a non-standard SQL and shows an internal representation of the query. B: SELECT T4.unique1, V.unique3 FROM T_4K T4,           (SELECT T10.unique3, T10.hundred, T10.ten             FROM T_5K T5, T_10K T10             WHERE T5.unique3 = T10.unique3             AND T4.unique3 = V.hundred(+)             AND T4.ten = V.ten(+)) V WHERE T4.thousand = 5; The execution plan for query B is shown below. 
In the execution plan for query B, note that the keyword 'VIEW PUSHED PREDICATE' indicates that the view has undergone the join predicate pushdown transformation. The join predicates have been moved into the view V; these join predicates open up index access paths, thereby enabling an index-based nested-loop join of the view. With join predicate pushdown, the cost of query A has come down from 62 to 32. As mentioned earlier, the join predicate pushdown transformation is cost-based, and a join predicate pushed-down plan is selected only when it reduces the overall cost.

Consider another example, query C, which contains a view with the UNION ALL set operator.

C: SELECT R.unique1, V.unique3
   FROM T_5K R,
        (SELECT T1.unique3, T2.unique1+T1.unique1
         FROM T_5K T1, T_10K T2
         WHERE T1.unique1 = T2.unique1
         UNION ALL
         SELECT T1.unique3, T2.unique2
         FROM G_4K T1, T_10K T2
         WHERE T1.unique1 = T2.unique1) V
   WHERE R.unique3 = V.unique3 and R.thousand < 1;

The execution plan of query C is shown below. In it, 'VIEW UNION ALL PUSHED PREDICATE' indicates that the UNION ALL view has undergone the join predicate pushdown transformation. As can be seen, the join predicate has been replicated and pushed inside every branch of the UNION ALL view. These join predicates open up index access paths, thereby enabling index-based nested loop joins of the view.

Consider query D as an example of join predicate pushdown into a DISTINCT view. The cardinalities of the tables involved in query D are: Sales (1,016,271), Customers (50,000), and Costs (787,766).

D: SELECT C.cust_last_name, C.cust_city
   FROM customers C,
        (SELECT DISTINCT S.cust_id
         FROM sales S, costs CT
         WHERE S.prod_id = CT.prod_id and CT.unit_price > 70) V
   WHERE C.cust_state_province = 'CA' and C.cust_id = V.cust_id;

The execution plan of query D is shown below. As the plan shows, when query D undergoes the join predicate pushdown transformation, the expensive DISTINCT operator is removed and the join is converted into a semi-join; this is possible since all the SELECT list items of the view participate in an equi-join with the outer tables. Under similar conditions, when a group-by view undergoes the join predicate pushdown transformation, the expensive group-by operator can also be removed. With the join predicate pushdown transformation, the elapsed time of query D came down from 63 seconds to 5 seconds. Since DISTINCT and group-by views are mergeable views, the cost-based transformation framework also compares the cost of merging the view with that of join predicate pushdown when selecting the most optimal execution plan.

Summary

We have tried to illustrate the basic ideas behind join predicate pushdown on different types of views by showing example queries that are quite simple. Oracle can handle far more complex queries and other types of views not shown here in the examples. Again, many thanks to Rafi Ahmed for the content of this blog post.
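For readers who want to experiment with this transformation on their own system, Oracle's long-standing PUSH_PRED and NO_PUSH_PRED hints let you compare the two plans for the same statement. The sketch below reuses the tables from query A; it assumes an index exists on T10.hundred or T10.ten for the pushed-down plan to exploit:

-- Force join predicate pushdown into the view V, then display the plan
EXPLAIN PLAN FOR
SELECT /*+ PUSH_PRED(V) */ T4.unique1, V.unique3
FROM   T_4K T4,
       (SELECT T10.unique3, T10.hundred, T10.ten
        FROM   T_5K T5, T_10K T10
        WHERE  T5.unique3 = T10.unique3) V
WHERE  T4.unique3 = V.hundred(+)
AND    T4.ten = V.ten(+)
AND    T4.thousand = 5;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Repeat with /*+ NO_PUSH_PRED(V) */ to see the untransformed plan and compare costs.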

    Read the article

  • TFS 2010 Basic Concepts

    - by jehan
Here, I'm going to discuss some key architectural changes and concepts that have taken place in TFS 2010 when compared to TFS 2008. In a TFS 2010 installation, you first perform the installation and then configure an installation feature from the available features. This is a bit similar to a SharePoint installation, where you first do the installation and then configure the SharePoint farms.

1) Installation features available in TFS 2010:

a) Basic: The most compact TFS installation possible. It installs and configures Source Control, Work Item Tracking and Build Services only (SharePoint and Reporting integration are not possible).

b) Standard Single Server: Suitable for a single-server deployment of TFS. It installs and configures Windows SharePoint Services for you and uses the default instance of SQL Server.

c) Advanced: Suitable if you want to use remote servers for the SQL Server databases, SharePoint Products and Technologies, and SQL Server Reporting Services.

d) Application Tier Only: For configuring high availability for Team Foundation Server in a network load balanced (NLB) environment, for moving Team Foundation Server from one server to another, or for restoring TFS.

e) Upgrade: For upgrading from a prior version of TFS.

Note: One more important thing to know about TFS 2010 Basic is that it can be installed on client operating systems (Windows 7 and Windows Vista SP2), whereas earlier versions of TFS (2005 and 2008) could not be installed on a client OS.

2) Team Project Collections:

Connect to TFS dialog box in TFS 2008: In TFS 2008, the TFS server contains a set of team projects, each of which may or may not be independent of the others. Every check-in gets an ever-increasing changeset ID irrespective of the team project in which it is checked in, and the same applies to work items, which also get unique work item IDs. The main problem with this approach was that certain things required by the application development process were impossible to do:

a) If something has gone wrong in one team project and you want to restore it to an earlier state where it was working properly, you have to restore the Team Foundation Server database from the backups taken as per your maintenance plans; because of this, the other team projects may lose work that was not backed up.

b) Your company had a merge with some other company and now you have two TFS servers.
One TFS server is the one you are working on, and the other is the one the other company was working on; after the merge you want to integrate the team projects from the two TFS servers into one, which is almost impossible to achieve in TFS 2008. Though you can manually create the team projects you want to integrate from the other TFS server in one server (in Source Control), you will lose the history of changesets, work items and other data, which are very important. There were a few more issues of this sort which were difficult to resolve in TFS 2008.

To resolve issues in the above kinds of scenarios, which were mainly related to TFS maintenance, integration, migration and security, Microsoft has come up with the Team Project Collections concept in TFS 2010. This concept is similar to SharePoint site collections, and if you are familiar with SharePoint architecture, it will help you understand TFS 2010 architecture easily.

Connect to TFS dialog box in TFS 2010: In the above dialog box, as you can see, there are two team project collections. Each collection can contain any number of team projects; on the right side it shows the two team projects in the collection (Default Collection) which I have chosen.

Note: You can connect to only one team project collection at a time using an instance of TFS Team Explorer.

How does it work? To introduce team project collections, the TFS databases have been reorganized. TFS 2008 was composed of 5-7 databases partitioned by subsystem (one each for version control, work item tracking, build, integration, project management, ...).

The new TFS 2010 database architecture:

TFS_Config: The root database; it contains centralized TFS configuration data, including the list of all team project collections that exist on the TFS server.

TFS_Warehouse: The data warehouse; it contains all the reporting data served by this server (farm).

TFS_*: One database per team project collection. Each such database contains all the operational data of that team project collection regardless of subsystem. In addition to these, you will have databases for SharePoint and Report Server.

3) TFS Farms: As TFS 2010 is more flexible to configure, with multiple application tiers and multiple database tiers, it is more appropriate to speak of a TFS farm if you are going for a multi-server installation of TFS.

NLB support for TFS application tiers: with TFS 2010 you can configure multiple TFS application tier machines to serve the same set of team project collections. The primary purpose of NLB support is to enable cleaner and more complete high availability than in TFS 2008. Even if an application tier in the farm fails, the farm will automatically continue to work, with hardly any indication to end users that there was a problem.

SQL data tiers: with TFS 2010 you can configure many SQL Servers. Each database can be placed on any SQL Server, because each team project collection is an independent database. This feature can also be used to load-balance databases across SQL Servers.

These new capabilities will significantly change the way enterprises manage their TFS installations in the future. With team project collections and TFS farms, you can create a single, arbitrarily large TFS installation, and grow it incrementally by adding application tiers and SQL Servers as needed.

    Read the article

  • SQL Server 2008 R2 Reporting Services - The Word is But a Stage (T-SQL Tuesday #006)

    - by smisner
Host Michael Coles (blog|twitter) has selected LOB data as the topic for this month's T-SQL Tuesday, so I'll take this opportunity to post an overview of reporting with spatial data types. As part of my work with SQL Server 2008 R2 Reporting Services, I've been exploring the use of spatial data types in the new map data region. You can create a map using any of the following data sources:

- Map Gallery - a set of Shapefiles for the United States only that ships with Reporting Services
- ESRI Shapefile - a .shp file conforming to the Environmental Systems Research Institute, Inc. (ESRI) shapefile spatial data format
- SQL Server spatial data - a query that includes SQLGeography or SQLGeometry data types

Rob Farley (blog|twitter) points out today in his T-SQL Tuesday post that using the SQL geography field is a preferable alternative to ESRI shapefiles for storing spatial data in SQL Server. So how do you get spatial data? If you don't already have a GIS application in-house, you can find a variety of sources. Here are a few to get you started:

- US Census Bureau Website, http://www.census.gov/geo/www/tiger/
- Global Administrative Areas Spatial Database, http://biogeo.berkeley.edu/gadm/
- Digital Chart of the World Data Server, http://www.maproom.psu.edu/dcw/

In a recent post by Pinal Dave (blog|twitter), you can find a link to free shapefiles for download and a tutorial for using Shape2SQL, a free tool to convert shapefiles into SQL Server data. In my post today, I'll show you how to combine spatial data that describes boundaries with spatial data in AdventureWorks2008R2 that identifies store locations, and embed the result as a map in a report.

Preparing the spatial data

First, I downloaded Shapefile data for the administrative boundaries in France and unzipped the data to a local folder. Then I used Shape2SQL to upload the data into a SQL Server database called Spatial. I'm not sure of the reason why, but I had to uncheck the option to create a spatial index to upload the data. Otherwise, the upload appeared to run successfully, but no table appeared in my database. The zip file that I downloaded contained three files, but I didn't know what was in them until I used Shape2SQL to upload the data into tables. Then I found that FRA_adm0 contains spatial data for the country of France, FRA_adm1 contains spatial data for each region, and FRA_adm2 contains spatial data for each department (a subdivision of region).

Next I prepared my SQL query containing sales data for fictional stores selling Adventure Works products in France. The Person.Address table in the AdventureWorks2008R2 database (which you can download from Codeplex) contains a SpatialLocation column, which I joined - along with several other tables - to the Sales.Customer and Sales.Store tables. I'll be able to superimpose this data on a map to see where these stores are located. I included the SQL script for this query (as well as the spatial data for France) in the downloadable project that I created for this post.
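If you'd rather write the store-location query yourself than pull it from the downloadable project, a query of roughly the right shape can be written directly against the standard AdventureWorks2008R2 schema; this is a sketch, not necessarily the exact query the project contains:

-- Store names and their geography points for stores located in France
SELECT s.Name AS StoreName,
       a.SpatialLocation          -- geography column consumed by the map layer
FROM   Sales.Store AS s
JOIN   Person.BusinessEntityAddress AS bea
       ON bea.BusinessEntityID = s.BusinessEntityID
JOIN   Person.Address AS a
       ON a.AddressID = bea.AddressID
JOIN   Person.StateProvince AS sp
       ON sp.StateProvinceID = a.StateProvinceID
JOIN   Person.CountryRegion AS cr
       ON cr.CountryRegionCode = sp.CountryRegionCode
WHERE  cr.Name = N'France';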
Step 1: Using the Map Wizard to Create a Map of France

You can build a map without using the wizard, but I find it's rather useful in this case. Whether you use Business Intelligence Development Studio (BIDS) or Report Builder 3.0, the map wizard is the same. I used BIDS so that I could create a project that includes all the files related to this post. To get started, I added an empty report template to the project and named it France Stores. Then I opened the Toolbox window and dragged the Map item to the report body, which starts the wizard. Here are the steps to perform to create a map of France:

- On the Choose a source of spatial data page of the wizard, select SQL Server spatial query, and click Next.
- On the Choose a dataset with SQL Server spatial data page, select Add a new dataset with SQL Server spatial data.
- On the Choose a connection to a SQL Server spatial data source page, select New. In the Data Source Properties dialog box, on the General page, add a connection string like this (changing your server name if necessary): Data Source=(local);Initial Catalog=Spatial
- Click OK and then click Next.
- On the Design a query page, add a query for the country shape, like this: select * from fra_adm1
- Click Next. The map wizard reads the spatial data and renders it for you on the Choose spatial data and map view options page. You have the option to add a Bing Maps layer, which shows surrounding countries. Depending on the type of Bing Maps layer that you choose to add (Road, Aerial, or Hybrid) and the zoom percentage you select, you can view city names, roads and various boundaries. To keep from cluttering my map, I'm going to omit the Bing Maps layer in this example, but I do recommend that you experiment with this feature. It's a nice integration feature.
- Use the + or - button to resize the map as needed. (I used the + button to increase the size of the map until its edges were just inside the boundaries of the visible map area, which is called the viewport.) You can eliminate the color scale and distance scale boxes that appear in the map area later.
- Select Embed map data in this report for faster rendering. The spatial data won't be changing, so there's no need to leave it in the database. However, it does increase the size of the RDL.
- Click Next. On the Choose map visualization page, select Basic Map. We'll add data for visualization later. For now, we have just the outline of France to serve as the foundation layer for our map.
- Click Next, and then click Finish. Now click the color scale box in the lower left corner of the map, and press the Delete key to remove it. Then repeat to remove the distance scale box in the lower right corner of the map.

Step 2: Add a Map Layer to an Existing Map

The map data region allows you to add multiple layers. Each layer is associated with a different dataset. Thus far, we have the spatial data that defines the regional boundaries in the first map layer. Now I'll add another layer for the store locations by following these steps:

- If the Map Layers window is not visible, click the report body, and then click twice anywhere on the map data region to display it.
- Click the New Layer Wizard button in the Map Layers window, and we start over again with the process of choosing a spatial data source. Select SQL Server spatial query, and click Next.
- Select Add a new dataset with SQL Server spatial data, and click Next.
- Click New, add a connection string to the AdventureWorks2008R2 database, and click Next.
- Add a query with spatial data (like the one I included in the downloadable project), and click Next. The location data now appears as another layer on top of the regional map created earlier.
- Use the + button to resize the map again to fill as much of the viewport as possible without cutting off edges of the map. You might need to drag the map within the viewport to center it properly.
- Select Embed map data in this report, and click Next.
- On the Choose map visualization page, select Basic Marker Map, and click Next.
- On the Choose color theme and data visualization page, in the Marker drop-down list, change the marker to diamond. There's no particular reason for a diamond; I think it stands out a little better than a circle on this map. Clear the Single color map checkbox as another way to distinguish the markers from the map. You can of course create an analytical map instead, which would change the size and/or color of the markers according to criteria that you specify, such as sales volume of each store, but I'll save that exploration for another post on another day.
- Click Finish and then click Preview to see the rendered report.

Et voilà...c'est fini. Yes, it's a very simple map at this point, but there are many other things you can do to enhance the map. I'll create a series of posts to explore the possibilities.

    Read the article

  • Bye Bye Year of the Dragon, Hello BPM

    - by Ajay Khanna
As 2012 fades and we usher in a New Year, let's look back at some of the hottest BPM trends and those we'll be seeing more of in the coming months. BPM is as much about people as it is about technology. As people adopt new ways of engagement, new channels of communication and new devices to interact with, the changes are reflected in BPM practices. As social and mobile have become an integral part of our personal and professional lives, we'll see tighter integration of social and mobile with BPM, and more use cases emerging for smarter process management in 2013. And with products and services becoming less differentiated, organizations will strive to differentiate on customer experience. Concepts like Pace Layered Architecture and Dynamic Case Management will provide more flexibility and agility to IT groups and knowledge workers. Take a look at some of these capabilities we showcased (see video) at Oracle OpenWorld 2012. Here are some of the trends that will continue to gain momentum in 2013:

Social networks and social media have provided a new way for businesses to engage with customers. A prospect is likely to reach out to their social network before making any purchase. Companies are increasingly engaging with customers in social networks to influence their purchasing decisions, as well as listening to customers via tools like sentiment analysis to see what customers think about a particular product or process. These insights are valuable as companies look to improve their processes. Inside organizations, workers are using social tools to engage with each other to design new products and processes. Social collaboration tools are being used to resolve issues where an employee needs consultation to reach a decision. Oracle BPM Suite includes social interaction as an integral part of its process design and work management to empower today's business users.

Ubiquitous smart mobile devices are trending as a tool of choice for many workers. Many companies are adopting the policy of "Bring Your Own Device," and the device of choice is a tablet. Devices like smartphones and tablets not only provide mobility to workers and customers, but they also provide additional important information – the context. By integrating the mobile context (location, photos, and preferences) into your processes, organizations can make much more informed decisions, as well as offer more personalized service to customers. Using Oracle ADF Mobile, you can easily create user interfaces for mobile devices and also capture location data for process execution.

Customer experience was at the forefront of trending topics in 2012. Organizations are trying to understand their customers better and offer them more personalized and differentiated services.
Customer experience is paramount when companies design sales and support processes. Companies are looking to BPM to consistently and efficiently orchestrate customer-facing processes across disparate systems, departments and channels of communication. Oracle BPM Suite provides just the right capabilities for organizations to design and deliver an excellent customer experience.

The Pace Layered Architecture strategy is gaining traction as a way to maximize agility and minimize disruption in organizations. It provides a framework to manage the evolution of your information system when different pieces of it are changing at different rates and need to be updated independently of one another. Oracle Fusion Middleware and Oracle BPM Suite are designed with this in mind. The database layer, integration layer, application layer, and process layer should not be required to change at the same time. Most business changes to policy or process can be made at the process layer without disrupting the whole infrastructure. By understanding the type of change needed at a particular level, organizations can become much more agile and efficient.

Adaptive case management offers more flexibility to manage processes or cases that do not follow a structured process flow. In such situations, the knowledge worker managing the case needs to evaluate what step should occur next, because the sequence of steps can't be predetermined. Another characteristic is that it requires much more collaboration than a straight-through process. As simple processes become automated and customers adopt more and more self-service, the cases that reach the case workers are much more complex and need more investigation. Oracle BPM Suite includes comprehensive adaptive case management capabilities to manage such unstructured and complex processes.

Smart BPM, or making your BPM intelligent, has been the holy grail for BPM practitioners who imagined that one day BPM would become one with business intelligence, business activity monitoring and complex event processing, making it much more responsive and helpful in organizational decision making. In 2013, organizations will begin to deploy these intelligent BPM solutions. Oracle offers an integrated solution that brings together the powerful functionality of BI, BAM, event processing, and Real Time Decisions to help organizations create smart process-based solutions.

In order to help customers reach their BPM goals faster and remove risks associated with BPM initiatives, Oracle has introduced Oracle Process Accelerators, pre-built best-practice applications built on Oracle BPM Suite that are fully production grade and ready to deploy.

These are exciting times for BPM practitioners and there is so much to look forward to in 2013. We wish you a very happy and prosperous New Year 2013. Happy BPMing!

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft's offices in London Victoria. As usual it was both a fun and informative evening, and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

Scene setting

A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline.

One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and it is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious, ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow's execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there's an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have, the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

General tips

Here is a simple tick list you can follow in order to tune your lookups (a worked example follows the list):

- Use a SQL statement to charge your cache, don't just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
- Only pick the columns that you need; ignore everything else.
- Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – a big waste if the actual values are significantly less than 20 characters in length.
- Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
- Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!
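Putting those tips together, the statement used to charge a lookup cache might look something like this. The table and column names are invented for illustration, but the principles (explicit narrow column list, tight WHERE clause) are the ones above:

-- Charge the Lookup cache with only what the dataflow needs
SELECT CustomerKey,                                          -- narrow integer join key
       CAST(AccountNumber AS VARCHAR(10)) AS AccountNumber   -- narrow DT_STR, not DT_WSTR
FROM   dbo.DimCustomer
WHERE  IsCurrent = 1;                                        -- only rows the pipeline can actually match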
Thinking outside the box

It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little, and here are a few ideas to get your creative juices flowing:

- There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource-intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
- Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data, then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset, and the remaining data that doesn't find a match could be passed to a second lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
- Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId], then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the region that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement, you'll be able to achieve it using the same dataflow, simply looping over it using a ForEach loop.
- Taking the previous data partitioning idea further: a dataflow can contain more than one data path, so why not split your data using a Conditional Split component and, again, charge your lookup caches with only the data that they need for that partition.
- Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set then you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
- Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.

Above all, experiment and be creative with different combinations. You may be surprised at what works.

Final thoughts

If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT, then that's a lot of work that wouldn't have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps!

@Jamiet

    Read the article

  • How Mary Meeker’s Latest Findings May Make You Re-Imagine Commerce

    - by Brenna Johnson-Oracle
Today, Mary Meeker released her highly anticipated annual "Internet Trends" presentation for 2014. All 164 slides are jam-packed with pretty much everything you need to know about the state of the Internet. And as luck would have it, Oracle is staying ahead of these trends (but we'll talk about that later). There were a few surprises, some stats to solidify what you likely already know, and Meeker's novel observations about where we are all going. What interested me the most is not only how people are engaging in their personal lives, but how they engage with brands.

As you could probably predict, Internet usage growth is slowing, while tablet user and mobile data traffic growth continue their meteoric rise around the globe, with tremendous growth in underpenetrated markets like China, India, Brazil and Indonesia. Now hold the "Internet is dead" comments. Keep in mind there's still plenty of room to grow, and a multiscreen model is Meeker's vision for our future. Despite 1.5x YOY growth for mobile traffic, mobile still only makes up about 23% of all traffic today. With tablet shipments easily outpacing figures for PCs even at their height (in 2007), mobile will only continue on its path, but won't be everything to everyone. Mobile won't replace every touchpoint; it has just created our shorter attention spans and a demand for simpler, more personal experiences. As Meeker points out, TVs, tablets, PCs, and smartphones are used for different activities at present, but the lines will blur (for example, 84% of smartphone owners use their device while watching TV).

Day-to-day activities are being re-imagined through simple, beautiful user experiences. It seems like every day I discover a new way a brand/site/app has made the most mundane or mounting task enjoyable and frictionless – and I'm not alone. Meeker points out that the evolution of how we do everything – how we communicate, get information, use money, meet someone, get places, order a meal, and consume media – has come through new user interfaces that make day-to-day tasks simpler. This movement has caused just about everyone's patience for a poor UX to take a nosedive. And it's not just the digital user experience; technology is making a lot of people's offline lives easier, and less expensive. Today 47% of online shopping utilizes free shipping – nearly half. And Meeker predicts same-day local delivery will be the "next big thing" (and you can take a guess at who will own that).

Content, community and commerce create the "Internet Trifecta." Meeker pointed out that when content, communities and commerce occur in a single experience, it's embraced by consumers, which translates to big dollars for brands. The magic happens when consumers can get inspired, research, and buy in a single experience.
As the buying cycle has changed and touchpoints (Web, mobile, social, store) are no longer tied to "roles" or steps in the customer journey, brands must make all experiences (content and commerce) available in a single, adaptable experience. (We at Oracle Commerce have a lot to say on this topic – stay tuned!)

And in what Meeker calls the "biggest re-imagination of all": consumers enabled with smartphones and sensors are creating troves of findable and sharable data, which she says is in its early stages but growing rapidly. She notes that the transparency and patterns of consumers with this hardware (FYI – there are up to 10 sensors embedded in smartphones now) have created a Big Data treasure chest to be mined to improve business and the life of the consumer. The opportunities are endless.

So what does it all mean for a company doing business online? Start thinking about how you can:

Re-imagine your experience. Not your online experience and your mobile experience and your social experience – your overall experience. When consumers can research, buy, and advocate from anywhere (and their attention spans are at an all-time low), channels don't exist. Enable simple and beautiful interactions informed by all of the online and offline data you leverage across your enterprise. Ethically leverage the endless supply of data (user-generated content, clicks, purchases, in-store behavior, social activity) to make experiences more beautiful, more accurate, and more personalized (not to mention, more lucrative for you).

Re-imagine content and commerce. Content and commerce must co-exist in a single destination where shoppers can get inspired, explore, research, share, and purchase in a collective experience. Think of how you can deliver an experience where all types of experiences (brand stories and commerce) adapt to every customer need. (Look for more on this topic coming soon.)

Re-imagine your reach. Look to Meeker's findings to see how the global appetite for digital experiences is growing, but under-served in many places (e.g. India, Mexico, Indonesia, Brazil, the Philippines). Growing your online business into a new geography doesn't have to mean starting from scratch or having an entirely new team manage the new endeavor. Expand using what you've already built in a multisite framework, with global language support. And of course, make sure it's optimized for mobile!

Re-imagine the possible. After every Meeker report, I'm always left with the thought "we are just at the beginning." Every day there is more data, more possibilities, more online consumers, and more opportunities to use the latest technology to get closer to your customers and be more successful. There's a lot going on in our Product Development and Product Innovations groups to automate innovation for our customers, so that they can continue to stay ahead of these trends without disrupting their business. Check out a recent interview with our Innovations Team on some of these new possibilities.

Staying on track despite the seemingly endless possibilities out there is the hard part. Prioritizing where you will focus based on your unique brand promise, customers and goals is what you do best. To learn how Oracle Commerce can help your business achieve your goals, check out oracle.com/commerce. Check out Meeker's entire report here.

    Read the article

  • Psychonauts crashes right after entering load save door

    - by user67974
    Psychonauts crashes right after entering the 'Load Save' door. Here is the terminal output: Shader assembly time: 0.88 seconds Found OpenAL device: 'Simple Directmedia Layer' Found OpenAL device: 'ALSA Software' Found OpenAL device: 'OSS Software' Found OpenAL device: 'PulseAudio Software' Opened OpenAL Device: '(null)' ERROR: CAudioDrv::CAudioDrv->alGenSources reports AL_INVALID_VALUE error. PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonfx.isb' to 'WorkResource/Sounds/commonfx.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonvoice.isb' to 'WorkResource/Sounds/commonvoice.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonmusic.isb' to 'WorkResource/Sounds/commonmusic.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonmentalfx.isb' to 'WorkResource/Sounds/commonmentalfx.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonmenfxmem.isb' to 'WorkResource/Sounds/commonmenfxmem.isb' PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/commonfxmem.isb' to 'WorkResource/Sounds/commonfxmem.isb' GameApp::StartUp InitSoundFiles() completed in 0.15 seconds GameApp::StartUp Load some common textures completed in 0.00 seconds WARN: ENGINE: Lua garbage collection starting FreeUnusedBlocksInBuckets released 0 Kb GameApp::StartUp InitEntities() completed in 0.02 seconds PSYCHONAUTS UNIX FILENAME: corrected 'WorkResource/SavedGames/savegameprefs.ini' to 'WorkResource/SAVEDGAMES/savegameprefs.ini' PSYCHONAUTS UNIX FILENAME: corrected 'WorkResource/SavedGames/savegameprefs.ini' to 'WorkResource/SAVEDGAMES/savegameprefs.ini' GameApp::StartUp m_pSaveLoadInterface->Startup() completed in 0.00 seconds GameApp::StartUp m_UserInterface.Setup() completed in 0.00 seconds STUBBED: multisample at EDisplayOptionsWidget (/home/icculus/projects/psychonauts/Source/game/luatest/Game/UIPCDisplayOptions.cpp:97) STUBBED: VK_* at CheckVirtualKey (/home/icculus/projects/psychonauts/Source/CommonLibs/DirectX/SDLInput.cpp:1443) Game: Engine Running hook startup Game: Engine -> SetupGlobalObjects Game: Engine -> SetupLevelMenu Game: Engine -> InitMath GameApp::StartUp InitLua2() completed in 0.00 seconds GameApp::StartUp SetupLevelMenu() completed in 0.00 seconds STUBBED: do we even use this? at InitSocket (/home/icculus/projects/psychonauts/Source/game/luatest/Game/Gameplaylogger.cpp:210) GameApp::StartUp Post-Install total completed in 0.20 seconds Start Up completed in 1.57 seconds UnixMain: StartUp successful.. Working directory: /opt/psychonauts STUBBED: dispatch SDL events at PCMainHandleAnyWindowsMessages (/home/icculus/projects/psychonauts/Source/game/luatest/UnixMain.cpp:56) STUBBED: write me at GetJoystickInput (/home/icculus/projects/psychonauts/Source/CommonLibs/DirectX/SDLInput.cpp:428) STUBBED: write me at GetJoystickActionValue (/home/icculus/projects/psychonauts/Source/CommonLibs/DirectX/SDLInput.cpp:613) PSYCHONAUTS UNIX FILENAME: corrected 'workresource/cutScenes/prerendered/dflogo.bik' to 'WorkResource/cutscenes/prerendered/DFLogo.bik' Prerender subtitle file: workresource\cutScenes\prerendered\dflogo.dfs not found PSYCHONAUTS UNIX FILENAME: corrected 'workresource/cutScenes/prerendered/dflogo.bik' to 'WorkResource/cutscenes/prerendered/DFLogo.bik' STUBBED: fixed function pipeline? at setColorOp (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2097) STUBBED: fixed function pipeline? 
at setColorArg1 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2106) STUBBED: fixed function pipeline? at setColorArg2 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2115) STUBBED: fixed function pipeline? at setAlphaOp (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2124) STUBBED: fixed function pipeline? at setAlphaArg1 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2133) STUBBED: fixed function pipeline? at setAlphaArg2 (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2142) STUBBED: fixed function pipeline? at setProjected (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Texture.cpp:2223) LOC WARN: Could not open Localization file 'Localization/English/_StringTable.lub' STUBBED: memory status at UpdateMemoryTracking (/home/icculus/projects/psychonauts/Source/game/luatest/Game/GameApp.cpp:4884) WARN: Couldn't resize array to 128; out-of-bounds elements are still in use: Vertex Pool, 188 Loading new level 'STMU' STUBBED: Need multithreaded GL at DisplayLoadingScreen (/home/icculus/projects/psychonauts/Source/game/luatest/Game/LoadingScreen.cpp:83) ========================= Memory post unload level ========================= ========================= LOC WARN: Could not open Localization file 'Localization/English/ST_StringTable.lub' DaveD: Info: Texture pack file contains 137 textures Doing a texture readback for locking! Game: Engine Saved[GLOBAL]: InstaHintFord_HostileRecord = [table] Game: Engine Saved[GLOBAL]: InstaHintFord_HostileOrder = [table] WARN: Redundant packfile read: anims\thought_bubble\bubblefirestarting.jan WARN: Redundant packfile read: anims\thought_bubble\bubbleintothemind.jan WARN: Redundant packfile read: anims\thought_bubble\bubbleinvisibility.jan WARN: Redundant packfile read: anims\thought_bubble\bubblepopperfill.jan WARN: Redundant packfile read: anims\thought_bubble\bubbletelekinesis.jan Initializing level script (if there is one) PSYCHONAUTS UNIX FILENAME: corrected 'workresource/sounds/stfx.isb' to 'WorkResource/Sounds/stfx.isb' Game: Engine Reloading goals: Game: Engine Saved[GLOBAL]: NextEncouragement = '/GLZF014TO/ 10' Game: Engine Saved[GLOBAL]: bUsedSalts = 0 Game: Engine Saved[GLOBAL]: bSTEntered = 1 Game: Engine Saved[GLOBAL]: memoriesST = 1 Game: Engine Saved[GLOBAL]: PsiBallColor = 'red' Game: Engine Saved[ST]: lastSubLevel = 'STMU' Game: Engine LOADING LEVEL st.STMU Game: Engine Saved[CA]: CALevelState = 1 Game: Engine Cutscene progression: CS Script moving from state nil to state nil, resultant state nil. Time: 0.124746672809124. 
* Stack Trace 1: (null) (line -1, file '(none)) () 2: SpawnScript (line -1, file 'C) (global) 3: onBeginLevel (line -1, file '(none)) (field) 4: (null) (line -1, file '(none)) () WARN: Cannot call GetDirectoryListing when running from the DVD Game: Engine Raz spawning at DartStart startpoint VM : LevelScript could not find script 'doorrimlight1' * Stack Trace 1: (null) (line -1, file '(none)) () WARN: (none(-1) SetEntityAlpha LevelScript: NULL script object passed Game: Engine Saved[GLOBAL]: bLoadedFromMainMenu = 1 Game: Engine Saved[GLOBAL]: NextEncouragement = '/GLZF014TO/ 10' Game: Engine Saved[GLOBAL]: NeedRankIncrement = 0 STUBBED: Need multithreaded GL at HideLoadingScreen (/home/icculus/projects/psychonauts/Source/game/luatest/Game/LoadingScreen.cpp:110) WARN: ENGINE: Lua garbage collection starting FreeUnusedBlocksInBuckets released 0 Kb Game: Engine Saved[GLOBAL]: SplineFigmentTVSizex = 4.51434326171875 Game: Engine Saved[GLOBAL]: SplineFigmentTVSizey = 46.38104248046875 Game: Engine Saved[GLOBAL]: SplineFigmentTVSizez = 47.08810424804688 WARN: (none(-1) SetNewAction LevelScript: no string passed ====================== Asset load progression ====================== Initial: 2.518 MB Vertex, 8.688 MB Texture Level : 3.719 MB Vertex, 22.535 MB Texture Scripts: 3.747 MB Vertex, 22.848 MB Texture ====================== ====================== Memory post level load ====================== ====================== WARN: ENGINE: Lua garbage collection starting FreeUnusedBlocksInBuckets released 0 Kb DaveD: Level loaded in 0.14 seconds Anim: anims\objects\tk_arrow_idle.jan: loaded (1 frames latency) Anim: anims\dartnew\helmet\darthelmetdn.jan: loaded (1 frames latency) Anim: anims\thought_bubble\shieldloop.jan: loaded (1 frames latency) Anim: anims\dartnew\standready.jan: loaded (1 frames latency) Anim: anims\dartnew\walkmove.jan: loaded (1 frames latency) Anim: anims\janitor\hint_end.jan: loaded (1 frames latency) Anim: anims\thought_bubble\ballstatic.jan: loaded (1 frames latency) Anim: anims\dartnew\actionfall.jan: loaded (1 frames latency) Anim: anims\dartnew\standstill.jan: loaded (1 frames latency) Anim: anims\dartnew\pack\packbounce_lf_rt.jan: loaded (1 frames latency) Anim: anims\dartnew\pack\packbounce_up_dn.jan: loaded (1 frames latency) Anim: anims\dartnew\helmet\darthelmetdefpose.jan: loaded (1 frames latency) 1: 1 (number) 1: 1 (number) STUBBED: This is probably wrong at GetDt (/home/icculus/projects/psychonauts/Source/CommonLibs/DFUtil/Profiler.cpp:181) STUBBED: set specular highlights at setSpecularEnable (/home/icculus/projects/psychonauts/Source/CommonLibs/DFGraphics/Renderer.cpp:2035) Anim: anims\dartnew\trnrtcycle.jan: loaded (1 frames latency) Anim: anims\dartnew\run.jan: loaded (1 frames latency) Anim: anims\dartnew\walk.jan: loaded (1 frames latency) Anim: anims\thought_bubble\bubbledoublejump.jan: loaded (1 frames latency) Anim: anims\dartnew\longjump.jan: loaded (1 frames latency) Anim: anims\menubrain\door1crack.jan: loaded (1 frames latency) Anim: anims\menubrain\door1crackedidle.jan: loaded (1 frames latency) Anim: anims\menubrain\door1closedidle.jan: loaded (1 frames latency) Anim: anims\dartnew\180.jan: loaded (1 frames latency) Anim: anims\menubrain\door3crack.jan: loaded (1 frames latency) Anim: anims\menubrain\door3crackedidle.jan: loaded (1 frames latency) Anim: anims\menubrain\door3closedidle.jan: loaded (1 frames latency) Anim: anims\dartnew\railslide45angle.jan: loaded (1 frames latency) Anim: anims\dartnew\railslideflat.jan: loaded (1 frames latency) Anim: 
anims\dartnew\trnlfcycle.jan: loaded (1 frames latency) WARN: (none(-1) SetNewAction LevelScript: no string passed Anim: anims\dartnew\mainmenu_jump.jan: loaded (1 frames latency) Anim: anims\menubrain\door1open.jan: loaded (1 frames latency) ERROR: Assert in /home/icculus/projects/psychonauts/Source/game/luatest/../../CommonLibs/Include/../DFGraphics/Color.h, line 96 v.x >= 0.0f && v.x <= 1.0f && v.y >= 0.0f && v.y <= 1.0f && v.z >= 0.0f && v.z <= 1.0f && v.w >= 0.0f && v.w <= 1.0f Encountered Error: Psychonauts has encountered an error /home/icculus/projects/psychonauts/Source/game/luatest/../../CommonLibs/Include/../DFGraphics/Color.h, line 96 v.x >= 0.0f && v.x <= 1.0f && v.y >= 0.0f && v.y <= 1.0f && v.z >= 0.0f && v.z <= 1.0f && v.w >= 0.0f && v.w <= 1.0f Please contact technical support at http://www.doublefine.com. I am currently using Bumblebee for hybrid graphics, if that helps in any way.

    Read the article

  • Listing common SQL Code Smells.

    - by Phil Factor
Once you’ve done a number of SQL code reviews, you’ll know those signs in the code that suggest all might not be well. These ’code smells’ are coding styles that don’t directly cause a bug, but are indicators that all is not well with the code. Kent Beck and Massimo Arnoldi seem to have coined the phrase on the "OnceAndOnlyOnce" page of www.C2.com, where Kent also said that code "wants to be simple". Bad Smells in Code was an essay by Kent Beck and Martin Fowler, published as Chapter 3 of the book ‘Refactoring: Improving the Design of Existing Code’ (ISBN 978-0201485677). Although there are generic code smells, SQL has its own particular coding habits that will alert the programmer to the need to re-factor what has been written. See Exploring Smelly Code and Code Deodorants for Code Smells by Nick Harrison for a grounding in code smells in C#.

I’ve always been tempted by the idea of automating a preliminary code review for SQL. It would be so useful to trawl through code and pick up the various problems, much like the classic ‘Lint’ did for C, and how the Code Metrics plug-in for .NET Reflector by Jonathan 'Peli' de Halleux is used for finding code smells in .NET code. The problem is that few of the standard procedural code smells are relevant to SQL, and we need an agreed list of code smells. Merrill Aldrich made a grand start last year in his blog Top 10 T-SQL Code Smells. However, I'd like to make a start by discovering if there is a general opinion amongst database developers about what the most important SQL smells are.

One can be a bit defensive about code smells. I will cheerfully write very long stored procedures, even though they are frowned on. I’ll use dynamic SQL occasionally. You can only use them as an aid for your own judgment, and it is fine to ‘sign them off’ as being appropriate in particular circumstances. Also, whole classes of ‘code smells’ may be irrelevant for a particular database. The use of proprietary SQL, for example, is only a ‘code smell’ if there is a chance that the database will have to be ported to another RDBMS. The use of dynamic SQL is a risk only with certain security models. As the saying goes, a code smell is a hint of possible bad practice to a pragmatist, but a sure sign of bad practice to a purist.

Plamen Ratchev’s wonderful article Ten Common SQL Programming Mistakes lists some of these ‘code smells’ along with out-and-out mistakes, but there are more. The use of nested transactions, for example, isn’t entirely incorrect, even though the database engine ignores all but the outermost; but it does flag up the possibility that the programmer thinks that nested transactions are supported. If anything requires some sort of general agreement, the definition of code smells is one. I’m therefore going to make this blog ‘dynamic’, in that if anyone twitters a suggestion with a #SQLCodeSmells tag (or sends me a twitter) I’ll update the list here. If you add a comment to the blog with a suggestion of what should be added or removed, I’ll do my best to oblige. In other words, I’ll try to keep this blog up to date. The name against each 'smell' is that of the person who twittered me, commented about, or has written about the smell; it does not imply that they were the first ever to think of it! (A worked example of one of these smells follows the list.)

- Use of deprecated syntax such as *= (Dave Howard)
- Denormalisation that requires the shredding of the contents of columns (Merrill Aldrich)
- Contrived interfaces
- Use of deprecated datatypes such as TEXT/NTEXT (Dave Howard)
- Datatype mismatches in predicates that rely on implicit conversion (Plamen Ratchev)
- Using correlated subqueries instead of a join (Dave Levy / Plamen Ratchev)
- The use of hints in queries, especially NOLOCK (Dave Howard / Mike Reigler)
- Few or no comments
- Use of functions in a WHERE clause (Anil Das)
- Overuse of scalar UDFs (Dave Howard, Plamen Ratchev)
- Excessive ‘overloading’ of routines
- The use of Exec xp_cmdShell (Merrill Aldrich)
- Excessive use of brackets (Dave Levy)
- Lack of the use of a semicolon to terminate statements
- Use of non-SARGable functions on indexed columns in predicates (Plamen Ratchev)
- Duplicated code, or strikingly similar code
- Misuse of SELECT * (Plamen Ratchev)
- Overuse of cursors (Everyone. Special mention to Dave Levy & Adrian Hills)
- Overuse of CLR routines when not necessary (Sam Stange)
- Same column name in different tables with different datatypes (Ian Stirk)
- Use of ‘broken’ functions such as ‘ISNUMERIC’ without additional checks
- Excessive use of the WHILE loop (Merrill Aldrich)
- INSERT ... EXEC (Merrill Aldrich)
- The use of stored procedures where a view is sufficient (Merrill Aldrich)
- Not using two-part object names (Merrill Aldrich)
- Using INSERT INTO without specifying the columns and their order (Merrill Aldrich)
- Full outer joins even when they are not needed (Plamen Ratchev)
- Huge stored procedures (hundreds/thousands of lines)
- Stored procedures that can produce different columns, or order of columns, in their results depending on the inputs
- Code that is never used
- Complex and nested conditionals
- WHILE (not done) loops without an error exit
- Variable name same as the datatype
- Vague identifiers
- Storing complex data or lists in a character map, bitmap or XML field
- User procedures with sp_ prefix (Aaron Bertrand)
- Views that reference views that reference views that reference views (Aaron Bertrand)
- Inappropriate use of sql_variant (Neil Hambly)
- Errors with identity scope using SCOPE_IDENTITY, @@IDENTITY or IDENT_CURRENT (Neil Hambly, Aaron Bertrand)
- Schemas that involve multiple dated copies of the same table instead of partitions (Matt Whitfield-Atlantis UK)
- Scalar UDFs that do data lookups (poor man's join) (Matt Whitfield-Atlantis UK)
- Code that allows SQL Injection (Mladen Prajdic)
- Tables without clustered indexes (Matt Whitfield-Atlantis UK)
- Use of "SELECT DISTINCT" to mask a join problem (Nick Harrison)
- Multiple stored procedures with nearly identical implementation (Nick Harrison)
- Excessive column aliasing, which may point to a problem or could be a mapping implementation (Nick Harrison)
- Joining "too many" tables in a query (Nick Harrison)
- Stored procedure returning more than one record set (Nick Harrison)
- A NOT LIKE condition (Nick Harrison)
- Excessive "OR" conditions (Nick Harrison)
- sp_OACreate or anything related to it (Bill Fellows)
- Prefixing names with tbl_, vw_, fn_, and usp_ ('tibbling') (Jeremiah Peschka)
- Aliases that go a, b, c, d, e... (Dave Levy / Diane McNurlan)
- Overweight queries (e.g. 4 inner joins, 8 left joins, 4 derived tables, 10 subqueries, 8 clustered GUIDs, 2 UDFs, 6 case statements = 1 query) (Robert L Davis)
- ORDER BY 3,2 (Dave Levy)
- Multi-statement table functions which are then filtered ('SELECT * FROM Udf() WHERE Udf.Col = Something') (Dave Ballantyne)
- Running a SQL 2008 system in SQL 2000 compatibility mode (John Stafford)
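To make one of these concrete – the non-SARGable function smell – compare the following two predicates (the table, column and index are invented for the example). The first prevents an index seek on OrderDate; the second returns the same rows but lets the optimizer use the index:

-- Smell: the function wrapped around the column defeats an index on OrderDate
SELECT OrderID FROM dbo.Orders WHERE YEAR(OrderDate) = 2010;

-- SARGable rewrite: same result, but the optimizer can now seek the index
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '20100101' AND OrderDate < '20110101';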

    Read the article

  • How to Upload a file from client to server using OFBIZ?

    - by SIVAKUMAR.J
I'm new to OFBiz, so try to keep your answer as simple as possible. If you can give examples, that would be kind. My problem: I created a project named productionmgntSystem inside the ofbiz/hot-deploy folder. Inside the folder ofbiz\hot-deploy\productionmgntSystem\webapp\productionmgntSystem I created a file app_details_1.ftl. The following is the code of this file:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Insert title here</title>
<script TYPE="TEXT/JAVASCRIPT" language="JAVASCRIPT">
function uploadFile() {
    //alert("Before calling upload.jsp");
    window.location='<@ofbizUrl>testing_service1</@ofbizUrl>'
}
</script>
</head>
<!-- <form action="<@ofbizUrl>testing_service1</@ofbizUrl>" enctype="multipart/form-data" name="app_details_frm"> -->
<form action="<@ofbizUrl>logout1</@ofbizUrl>" enctype="multipart/form-data" name="app_details_frm">
<center style="height: 299px; ">
<table border="0" style="height: 177px; width: 788px">
  <tr style="height: 115px; ">
    <td style="width: 103px; ">
    <td style="width: 413px; "><h1>APPLICATION DETAILS</h1>
    <td style="width: 55px; ">
  </tr>
  <tr>
    <td style="width: 125px; ">Application name : </td>
    <td> <input name="app_name_txt" id="txt_1" value=" " /> </td>
  </tr>
  <tr>
    <td style="width: 125px; ">Excell sheet &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;: </td>
    <td> <input type="file" name="filename"/> </td>
  </tr>
  <tr>
    <td>
      <!-- <input type="button" name="logout1_cmd" value="Logout" onclick="logout1()"/> -->
      <input type="submit" name="logout_cmd" value="logout"/>
    </td>
    <td>
      <!-- <input type="submit" name="upload_cmd" value="Submit" /> -->
      <input type="button" name="upload1_cmd" value="Upload" onclick="uploadFile()"/>
    </td>
  </tr>
</table>
</center>
</form>
</html>

The following is present in the file ofbiz\hot-deploy\productionmgntSystem\webapp\productionmgntSystem\WEB-INF\controller.xml:

......
<request-map uri="testing_service1">
    <security https="true" auth="true"/>
    <event type="java" path="org.ofbiz.productionmgntSystem.web_app_req.WebServices1" invoke="testingService"/>
    <response name="ok" type="view" value="ok_view"/>
    <response name="exception" type="view" value="exception_view"/>
</request-map>
......
<view-map name="ok_view" type="ftl" page="ok_view.ftl"/>
<view-map name="exception_view" type="ftl" page="exception_view.ftl"/>
......
The following code is present in the file ofbiz\hot-deploy\productionmgntSystem\src\org\ofbiz\productionmgntSystem\web_app_req\WebServices1.java:

    package org.ofbiz.productionmgntSystem.web_app_req;

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import java.io.DataInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class WebServices1 {

        public static String testingService(HttpServletRequest request, HttpServletResponse response) {
            String result = "ok";
            System.out.println("WebServices1.testingService - start");
            String contentType = request.getContentType();
            System.out.println("WebServices1.testingService - contentType: " + contentType);
            // response.setContentType("text/html");
            if ((contentType != null) && (contentType.indexOf("multipart/form-data") >= 0)) {
                try {
                    DataInputStream in = new DataInputStream(request.getInputStream());
                    int formDataLength = request.getContentLength();
                    byte dataBytes[] = new byte[formDataLength];
                    int byteRead = 0;
                    int totalBytesRead = 0;
                    // Read the whole request body into the byte array. The read
                    // length must be the *remaining* bytes (formDataLength -
                    // totalBytesRead), not the total length, or the call can
                    // over-run the buffer.
                    while (totalBytesRead < formDataLength) {
                        byteRead = in.read(dataBytes, totalBytesRead, formDataLength - totalBytesRead);
                        totalBytesRead += byteRead;
                    }
                    // Caution: converting raw multipart bytes to a String like
                    // this corrupts binary uploads; it only works for text files.
                    String file = new String(dataBytes);
                    // Extract the original file name from the part headers.
                    String saveFile = file.substring(file.indexOf("filename=\"") + 10);
                    saveFile = saveFile.substring(0, saveFile.indexOf("\n"));
                    saveFile = saveFile.substring(saveFile.lastIndexOf("\\") + 1, saveFile.indexOf("\""));
                    int lastIndex = contentType.lastIndexOf("=");
                    String boundary = contentType.substring(lastIndex + 1, contentType.length());
                    // Locate the start of the file content by skipping the
                    // part headers, then find the closing boundary.
                    int pos;
                    pos = file.indexOf("filename=\"");
                    pos = file.indexOf("\n", pos) + 1;
                    pos = file.indexOf("\n", pos) + 1;
                    pos = file.indexOf("\n", pos) + 1;
                    int boundaryLocation = file.indexOf(boundary, pos) - 4;
                    int startPos = ((file.substring(0, pos)).getBytes()).length;
                    int endPos = ((file.substring(0, boundaryLocation)).getBytes()).length;
                    // Create a new file with the same name and write the content into it.
                    FileOutputStream fileOut = new FileOutputStream("/" + saveFile);
                    fileOut.write(dataBytes, startPos, (endPos - startPos));
                    fileOut.flush();
                    fileOut.close();
                } catch (IOException ioe) {
                    System.out.println("WebServices1.testingService - caught IOException");
                    //ioe.printStackTrace();
                    return ("exception");
                } catch (Exception ex) {
                    System.out.println("WebServices1.testingService - caught Exception");
return("exception"); } } else { System.out.println("\n\n\t********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) else part"); result="exception"; } System.out.println("\n\n\t*************************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response)- End"); return(result); } } I want to upload a file to the server. The file is get from user " tag in the "app_details_1.ftl" file & it is updated into the server by using the method "testingService(HttpServletRequest request, HttpServletResponse response)" in the class "WebServices1". But the file is not uploaded. Give me a good solution for uploading a file to the server.

    Read the article

  • SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS

    - by pinaldave
Data Quality Services is a very important feature of SQL Server. I have recently started to explore it and I am learning some good concepts. Here are two very important blog posts which one should go over before continuing with this post: Installing Data Quality Services (DQS) on SQL Server 2012, and Connecting Error to Data Quality Services (DQS) on SQL Server 2012. This article is an introduction to Data Quality Services for beginners, and we will be using an Excel file as our source data. In the first article we learned to install DQS. In this article we will see how to build a Knowledge Base and use it to help us identify the quality of the data, as well as correct the bad data. Here are the two important steps we will be learning in this tutorial: 1) building a new Knowledge Base, and 2) creating a new Data Quality Project. Let us start building the Knowledge Base. Click on New Knowledge Base. In our project we will be using Excel as the knowledge base source. In the Excel file there are two columns: one is Colors and the other is Shade. They are independent columns, not related to each other. The point I am trying to show is that Column A (Colors) contains unique data while Column B (Shade) contains duplicate records. Clicking on New Knowledge Base will bring up a screen where you enter the name of the new knowledge base. Clicking NEXT will bring up the following screen, which lets you select the Excel file and the source columns; I have selected both Colors and Shade as source columns. Creating a domain is very important. Here you can create separate domains, or a composite domain built from Colors and Shade. As this is the first example, I will create separate domains: a Colors domain for the Colors column and a Shade domain for the Shade column. Clicking NEXT brings you to the screen where you can do the data discovery. Clicking on START starts the processing of the source data. The pre-processed data shows various information related to the source data; in our case it shows that the Colors column has unique data whereas Shade has non-unique data, with only two unique rows. On the next screen you can add more rows as well as see the frequency of the data, as the values are listed uniquely. Clicking NEXT will publish the knowledge base we have just created. Now that the knowledge base is created, we will take some random data and attempt a DQS implementation over it. I am using another Excel sheet here for simplicity's sake; in reality you can just as easily use a SQL Server table. Click on New Data Quality Project to start a DQS project. The next screen asks which knowledge base to use; we will be using the Colors knowledge base which we have just created. In the Colors knowledge base we had two domains, 1) Colors and 2) Shade, and we will be using both of the mappings here; the user can select one or multiple column mappings. Now comes the most important phase of the project. Click on Start and it will run the cleansing process and show various results. In our case there were two columns to be processed, and it completed the task with the necessary information. It shows that in the Colors column it has not corrected any value by itself, but for the Shade column it has a suggestion.
We can train DQS to correct values, but let us keep that subject for future blog posts. Now click NEXT and keep the Colors domain selected on the left side. It shows that there are two incorrect values which need to be corrected, and this is also the place where, once a value is corrected, it will be auto-corrected in the future. I manually corrected the values here and clicked the Approve radio buttons. As soon as I clicked Approve, the rows disappeared from this tab and moved to the Corrected tab; had I clicked Reject, the rows would have moved to the Invalid tab instead. On this screen you can see the 2 corrected rows, and you can click on the Corrected tab to see the 6 previously validated rows which passed the DQS process. Now let us click on the Shade domain on the left side of the screen. This domain shows very interesting details, as the DQS system guessed the correct answer, Dark, with a confidence level of 77%. That is quite a high confidence level, and manual observation also confirms that Dark is the correct answer. I clicked on Approve and the row moved to the Corrected tab. On the next screen DQS shows the summary of all the activities; it also demonstrates how the correction of the data quality was performed. The user can export the data to a SQL Server table, a CSV file, or Excel, and has the option to export either the data with all the associated cleansing info, or the data only. I will select Data Only for demonstration purposes. Clicking Export will generate the file. Let us open the generated file: it looks complete and corrected. We have successfully completed the DQS process, and the process is indeed very easy. I suggest you try this out yourself; you will find it very easy to learn. In future posts we will go over advanced concepts. Are you using this feature on your production server? If yes, would you please leave a comment with your environment and business need? It will be interesting to see where it is implemented. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS
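    Since the post notes that a SQL Server table can stand in for the Excel source, here is a minimal T-SQL sketch of what such a source table might look like; the table name and the sample values are hypothetical, chosen only to mirror the Colors/Shade layout described above:

    -- Hypothetical stand-in for the Excel source used in this walkthrough:
    -- unique values in Colors, duplicated values in Shade.
    CREATE TABLE dbo.ColorSource
    (
        Colors NVARCHAR(50) NOT NULL,
        Shade  NVARCHAR(50) NOT NULL
    );

    INSERT INTO dbo.ColorSource (Colors, Shade)
    VALUES (N'Red',    N'Dark'),
           (N'Blue',   N'Dark'),
           (N'Green',  N'Light'),
           (N'Yellow', N'Light');

    A table like this can then be selected as the data source when creating the knowledge base or the data quality project, in place of the Excel file.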

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building / studying software systems. He is a top rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years, Niraj enjoys working on IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received the Microsoft MVP award for ASP.NET, Connected Systems and most recently for Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements, though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia's largest .NET user group. Here is the guest post by Niraj Bhatt. As data in your applications grows, it's the database that usually becomes a bottleneck. It's hard to scale a relational DB, so the preferred approach for large scale applications is to create separate databases for writes and reads, referred to as the transactional database and the reporting database. Though there are tools / techniques which allow you to create a snapshot of your transactional database for reporting purposes, sometimes they don't quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, an effective schema (for an information worker to self-service herself), historical data, better performance (flat data, no joins), etc. This is where the need for a data warehouse or an OLAP system arises. A key point to remember is that a data warehouse is mostly a relational database; it's built on top of the same concepts: tables, rows, columns, primary keys, foreign keys, and so on. Before we talk about how data warehouses are typically structured, let's understand the key components that create a data flow between OLTP systems and OLAP systems. There are 3 major areas to it: a) The OLTP system should be capable of tracking its changes, as all these changes should go back to the data warehouse for historical recording. For example, if an OLTP transaction moves a customer from the silver to the gold category, the OLTP system needs to ensure that this change is tracked and sent to the data warehouse for reporting purposes. A report in this context could be how many customers, divided by geography, moved from the silver to the gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables, and there are also out-of-the-box features provided by some databases; e.g. SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements (a quick sketch of this follows below). b) After we make the OLTP system capable of tracking its changes, we need to provision a batch process that runs periodically, takes these changes from the OLTP system, and loads them into the data warehouse. There are many tools out there that can help you fill this gap – SQL Server Integration Services happens to be one of them. c) So we have an OLTP system that knows how to track its changes, and we have jobs that run periodically to move these changes to the warehouse. The question that remains, though, is how the warehouse will record these changes. This structural change in the data warehouse arena is often covered under something called Slowly Changing Dimension (SCD).
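    Before moving on to SCD, here is the sketch promised under point (a): a minimal T-SQL example of enabling SQL Server 2008's Change Data Capture. The dbo.Customer table is hypothetical, and a production setup would also involve choosing a gating role and a captured column list:

    -- Enable CDC at the database level (requires sysadmin).
    EXEC sys.sp_cdc_enable_db;

    -- Enable CDC for a hypothetical dbo.Customer table. SQL Server then
    -- records every insert/update/delete into a change table that the
    -- periodic warehouse load job (point b) can read from.
    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'Customer',
        @role_name     = NULL;   -- NULL = no gating role; tighten in production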
While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data, which creates multiple records for a given entity; this is why data warehouses prefer having their own primary key, often known as a surrogate key. As I mentioned, a data warehouse is just a relational database, but the industry often attributes a specific schema style to data warehouses. These styles are the Star Schema and the Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to a normalized one) which is easy to understand and use, easy to query, and easy to slice and dice. A star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables hold these numbers and have references (foreign keys) to a set of tables that provide context around those facts. For example, if you have recorded 10,000 USD in sales, that number would go into a sales fact table, with foreign keys referring to the sales agent responsible for the sale and to a time table which contains the dates between which that sale was made. These agent and time tables are called dimensions, and they provide context to the numbers stored in fact tables. This structure, with the fact at the center surrounded by dimensions, is called a star schema; a similar structure in which the dimension tables are normalized is called a snowflake schema. This relational structure of facts and dimensions serves as the input for another analysis structure called a Cube. Though physically a cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it's a multidimensional structure where dimensions define the sides of the cube and facts define the content; facts are often called Measures inside a cube. Dimensions often tend to form a hierarchy. E.g. a Product may be broken into categories, and categories in turn into individual items; Category and Item are then referred to as Levels, their constituents as Members, and the overall structure as a Hierarchy. Measures are rolled up as per the dimensional hierarchy, and these rolled-up measures are called Aggregates. Now this may seem like an overwhelming vocabulary to deal with, but don't worry, it will sink in as you start working with cubes; the sketch below makes the star schema concrete.
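    Here is a minimal T-SQL sketch of the sales example described above; all table and column names are hypothetical, invented for illustration:

    -- Dimension: the sales agent responsible for the sale.
    -- The IDENTITY column is the surrogate key discussed under SCD.
    CREATE TABLE dbo.DimSalesAgent
    (
        SalesAgentKey INT IDENTITY(1,1) PRIMARY KEY,
        AgentName     NVARCHAR(100) NOT NULL
    );

    -- Dimension: the time context of the sale.
    CREATE TABLE dbo.DimDate
    (
        DateKey  INT PRIMARY KEY,   -- e.g. 20120315
        FullDate DATE NOT NULL
    );

    -- Fact: the numbers to slice and dice, at the center of the star,
    -- with foreign keys pointing out to the dimensions.
    CREATE TABLE dbo.FactSales
    (
        SalesAgentKey INT NOT NULL REFERENCES dbo.DimSalesAgent (SalesAgentKey),
        DateKey       INT NOT NULL REFERENCES dbo.DimDate (DateKey),
        SalesAmount   DECIMAL(18, 2) NOT NULL   -- the measure, e.g. 10,000 USD
    );

    In a snowflake schema the only structural difference would be that DimSalesAgent and DimDate are themselves normalized into further lookup tables.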
Let's look at a few other terms that we run into while talking about data warehouses. ODS, or an Operational Data Store, is a frequently misused term. There will be a few users in your organization who want to report on the most current data and can't afford to miss a single transaction for their report. Then there is another set of users who typically don't care how current the data is: mostly senior-level executives who are interested in trending, mining, forecasting, strategizing, etc., and don't care about one specific transaction. This is where an ODS comes in handy. An ODS can use the same star schema and the OLAP cubes we saw earlier; the only difference is that the data inside an ODS is short-lived, i.e. kept for a few months, and the ODS syncs with the OLTP system every few minutes. The data warehouse can periodically sync with the ODS, either daily or weekly, depending on business drivers. Data marts are another frequently talked-about topic in data warehousing. They are subject-specific data warehouses. Data warehouses that try to span an entire enterprise are normally too big to scope, build, manage, and track, so they are often scaled down to something called a data mart, which supports a specific segment of business like sales, marketing, or support. Data marts, too, are often designed using the star schema model discussed earlier. The industry is divided when it comes to the use of data marts. Some experts prefer having data marts along with a central data warehouse, where the data warehouse acts as an information staging and distribution hub, with the spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse, citing that most users want to report on detailed data. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
