Search Results

Search found 36762 results on 1471 pages for 'common table expression'.

Page 342/1471 | < Previous Page | 338 339 340 341 342 343 344 345 346 347 348 349  | Next Page >

  • route http and ssh traffic normally, everything else via vpn tunnel

    - by Normadize
    I've read quite a bit and feel I'm close, but I'm pulling my hair out ... please help! I have an OpenVPN client whose server sets local routes and also changes the default gateway (I know I can prevent that with --route-nopull). I'd like all outgoing HTTP and SSH traffic to go via the local gateway, and everything else via the VPN. The local IP is 192.168.1.6/24 with gateway 192.168.1.1; the OpenVPN local IP is 10.102.1.6/32 with gateway 10.102.1.5; the OpenVPN server is at {OPENVPN_SERVER_IP}. Here's the route table after the OpenVPN connection:

        # ip route show table main
        0.0.0.0/1 via 10.102.1.5 dev tun0
        default via 192.168.1.1 dev eth0 proto static
        10.102.1.1 via 10.102.1.5 dev tun0
        10.102.1.5 dev tun0 proto kernel scope link src 10.102.1.6
        {OPENVPN_SERVER_IP} via 192.168.1.1 dev eth0
        128.0.0.0/1 via 10.102.1.5 dev tun0
        169.254.0.0/16 dev eth0 scope link metric 1000
        192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.6 metric 1

    This sends all packets via the VPN tunnel except those destined for 192.168.1.0/24. Doing wget -qO- http://echoip.org shows the VPN server's address, as expected; the packets have 10.102.1.6 as their source address (the VPN local IP) and are routed via tun0, as reported by tcpdump -i tun0 (tcpdump -i eth0 sees none of this traffic). What I tried was:

        - create a second routing table holding the 192.168.1.0/24 routing info (copied from the main table above)
        - add an iptables -t mangle -I PREROUTING rule to mark packets destined for port 80
        - add an ip rule matching the marked packets that points to the second routing table
        - add ip rules for "to 192.168.1.6" and "from 192.168.1.6" pointing to the second routing table (though this is superfluous)
        - disable reverse-path filtering with net.ipv4.conf.tun0.rp_filter=0 and net.ipv4.conf.eth0.rp_filter=0

    I also tried an iptables mangle OUTPUT rule and a nat PREROUTING rule. It still fails and I'm not sure what I'm missing: with mangle PREROUTING the packet still goes via the VPN; with mangle OUTPUT the packet times out. Isn't it the case that, to achieve what I want, the packet's source address for wget http://echoip.org should be changed to 192.168.1.6 before it is routed out? But if I do that, won't the response from the HTTP server be routed back to 192.168.1.6 and never reach wget, since the socket is still bound to tun0 (the VPN interface)? Can a kind soul please help? What commands would you execute after OpenVPN connects to achieve this? Looking forward to hair regrowth ...
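
    A minimal sketch of the usual fwmark approach, assuming routing table 100 is unused and the addresses above are current. The two pieces most setups miss are that locally generated traffic traverses the mangle OUTPUT chain (not PREROUTING), and that marked packets leaving eth0 need SNAT so replies come back to the local address rather than the tunnel:

        # route table 100: send matching traffic out the local gateway
        ip route add default via 192.168.1.1 dev eth0 table 100
        ip route add 192.168.1.0/24 dev eth0 src 192.168.1.6 table 100

        # mark locally generated HTTP and SSH packets (OUTPUT, not PREROUTING)
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,22 -j MARK --set-mark 1

        # rewrite the source address so replies return via eth0
        iptables -t nat -A POSTROUTING -o eth0 -m mark --mark 1 -j SNAT --to-source 192.168.1.6

        # route marked packets through table 100 and relax reverse-path filtering
        ip rule add fwmark 1 table 100
        ip route flush cache
        sysctl -w net.ipv4.conf.eth0.rp_filter=0
        sysctl -w net.ipv4.conf.tun0.rp_filter=0

    The SNAT rule answers the source-address worry in the question: conntrack rewrites the reply's destination back on the way in, so wget sees the response even though its socket picked the tunnel address.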

    Read the article

  • Storing User-uploaded Images

    - by Nyxynyx
    What is the usual practice for handling user-uploaded photos and storing them on the database and server?

    For a user profile image:

        - After receiving the image file from the user, rename the file to <image_id>_<username>
        - Move the image to /images/userprofile
        - Add the image filename to a table users containing profile details like first_name, last_name, age, gender, birthday

    For an image attached to a review written by a user:

        - After receiving the image file from the user, rename the file to <image_id>_<review_id>
        - Move the image to /images/reviews
        - Add the image filename to a table reviews containing review details like review_id, review_content, user_id, score

    Question 1: How should I go about storing the image filenames if the user can upload multiple photos for a particular review? Serialize them? Question 2: Or have another table review_images with columns review_id, image_id, image_filename just for tracking images? Will doing a JOIN when retrieving the image_filename from this table slow down performance noticeably? Question 3: Should all the images be stored in a single folder? Will there be a problem when we have 100K photos in the same folder? Is there a more efficient way to go about doing this?
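
    A minimal sketch of the separate-table option from question 2 (table and column names are taken from the question; the types are assumptions). One row per upload avoids serializing, and an index on review_id makes the JOIN a cheap indexed lookup rather than a noticeable cost:

        CREATE TABLE review_images (
            image_id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            review_id      INT UNSIGNED NOT NULL,
            image_filename VARCHAR(255) NOT NULL,
            INDEX idx_review_id (review_id)
        );

        -- fetching all images for one review is a single index range scan
        SELECT image_filename FROM review_images WHERE review_id = 42;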

    Read the article

  • CommonFilter and CommonData solutions on CodePlex updated

    - by TATWORTH
    The CommonFilter and CommonData solutions on CodePlex have been updated post VS2010 SP1. The respective URLs are: http://commondata.codeplex.com/releases/view/62502 http://commonfilter.codeplex.com/releases/view/62499 CommonFilter is a cut-down version of CommonData containing just the filter functions. CommonData contains a vast number of useful functions for building ASP.NET web sites, including lightweight reporting to a custom event log and filter functions for common types of data input.

    Read the article

  • Parse text file on click - and then display

    - by John R
    I am thinking of a methodology for rapid retrieval of code snippets. I imagine an HTML table with a setup like this:

                 one        two       ...  ten
        one                 oneTwo()       oneTen()
        two      twoOne()                  twoTen()
        ...
        ten      tenOne()   tenTwo()

    When a user clicks a function in this HTML table, a snippet of code is shown in another div tag or perhaps a popup window (I'm open to different solutions). I want to maintain only one PHP file named utilities.php that contains a class called 'util'. This file & class will hold all the functions referenced in the above table (it is also used on various projects and is functional code). A key idea is that I do not want to update the HTML documentation every time I write or update a function in utilities.php. I should be able to click a function in the table and have PHP open the utilities file, parse out the appropriate function and display it in an HTML window. Questions: 1) I will be coding this in PHP and JavaScript but am wondering if similar scripts are available (for all or part) so I don't reinvent the wheel. 2) Quick & easy Ajax suggestions are appreciated too (I'll probably use jQuery, but am rusty). 3) What methodology would you use for parsing the functions out of the utilities.php file (I'm not too good with regex)?
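
    For question 3, PHP's Reflection API can replace the regex parsing entirely: it reports the start and end lines of any method, so the snippet can be sliced straight out of the source file. A minimal sketch, assuming the class 'util' in utilities.php as described (the fn query parameter is a hypothetical name and should be validated against a whitelist in real use):

        <?php
        require_once 'utilities.php';

        function getMethodSource($methodName) {
            $method = new ReflectionMethod('util', $methodName);
            $lines  = file($method->getFileName());
            $start  = $method->getStartLine() - 1;   // getStartLine() is 1-based
            $length = $method->getEndLine() - $start;
            return implode('', array_slice($lines, $start, $length));
        }

        // e.g. called via Ajax: snippet.php?fn=oneTwo
        echo '<pre>' . htmlspecialchars(getMethodSource($_GET['fn'])) . '</pre>';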

    Read the article

  • SQL Server: Writing CASE expressions properly when NULLs are involved

    - by Mladen Prajdic
    We’ve all written a CASE expression (yes, it’s an expression and not a statement) or two every now and then. But did you know there are actually two formats you can write the CASE expression in? This actually bit me when I was trying to add some new functionality to an old stored procedure. In some rare cases the stored procedure just didn’t work correctly. After a quick look it turned out to be a CASE expression problem when dealing with NULLs. In the first format we make simple “equals to” comparisons to a value:

        SELECT CASE <value>
                 WHEN <equals this value> THEN <return this>
                 WHEN <equals this value> THEN <return this>
                 -- ... more WHENs here
                 ELSE <return this>
               END

    The second format is much more flexible since it allows for complex conditions. USE THIS ONE!

        SELECT CASE
                 WHEN <value> <compared to> <value> THEN <return this>
                 WHEN <value> <compared to> <value> THEN <return this>
                 -- ... more WHENs here
                 ELSE <return this>
               END

    Now that we know both formats and you know which to use (the second one, if that hasn’t been clear enough), here’s an example of how the first format WILL make your evaluation logic WRONG. Run the following code for different values of @i; just comment out two of the three “SELECT @i =” statements.

        DECLARE @i INT
        SELECT @i = -1   -- first result
        SELECT @i = 55   -- second result
        SELECT @i = NULL -- third result

        SELECT @i AS OriginalValue,
               -- first CASE format. DON'T USE THIS!
               CASE @i
                 WHEN -1 THEN '-1'
                 WHEN NULL THEN 'We have a NULL!'
                 ELSE 'We landed in ELSE'
               END AS DontUseThisCaseFormatValue,
               -- second CASE format. USE THIS!
               CASE
                 WHEN @i = -1 THEN '-1'
                 WHEN @i IS NULL THEN 'We have a NULL!'
                 ELSE 'We landed in ELSE'
               END AS UseThisCaseFormatValue

    When the value of @i is -1 everything works as expected, since both formats go into the -1 WHEN branch. When the value of @i is 55 everything again works as expected, since both formats go into the ELSE branch. When the value of @i is NULL the problems become evident. The first format doesn’t go into the WHEN NULL branch because it makes an equality comparison between two NULLs. Because a NULL is an unknown value, NULL = NULL evaluates to UNKNOWN, not true. That is why the first format falls into the ELSE branch, while the second format correctly handles the proper IS NULL comparison. Please use the second, more explicit format. Your future self will be very grateful when he doesn’t have to discover these kinds of bugs.

    Read the article

  • Laptop Display Not Working

    - by etrask
    Hello, I just purchased this laptop. It is working fine but I want to use Xubuntu on it. I managed to get 10.04 installed and running but it did not recognize/use the wireless card, so I want to try 10.10. However, every time I try, the screen goes black and never comes back on. I have tried the Live CD and Alternate install CDs, for both Ubuntu and Xubuntu 10.10. After I select "Install Ubuntu" or "Try Ubuntu", the screen blacks out and never displays anything. So I deleted "quiet" and "splash" off the command line to see what messages would come up. A bunch of text flies by but it ends at:

        [time] TCP established hash table entries: 524288 (order: 11, 8388608 bytes)
        [time] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
        [time] TCP: Hash tables configured (established 524288 bind 65536)
        [time] TCP reno registered
        [time] UDP hash table entries: 2048 (order: 4, 65536 bytes)
        [time] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes)
        [time] NET: Registered protocol family 1

    And it freezes here forever. I have tried nomodeset, vga=771, and xforcevesa with no progress. Is my video card simply not going to work with this distro? It seems strange that 10.04 would work fine but not 10.10. Any help is appreciated, thank you.

    Read the article

  • 404 when doing safe-upgrade in lucid 64 box?

    - by Millisami
    Why do I see a 404 when doing sudo aptitude safe-upgrade on my lucid 64 box?

        deploy@li167-251:~$ sudo aptitude safe-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done
        The following packages will be upgraded:
          apache2 apache2-mpm-prefork apache2-threaded-dev apache2-utils apache2.2-bin apache2.2-common apt apt-utils base-files binutils bzip2 dpkg dpkg-dev gzip ifupdown krb5-multidev language-pack-en language-pack-en-base language-selector-common libatk1.0-0 libatk1.0-dev libavahi-client3 libavahi-common-data libavahi-common3 libbz2-1.0 libc-bin libc-dev-bin libc6 libc6-dev libc6-i686 libcups2 libfreetype6 libfreetype6-dev libglib2.0-0 libglib2.0-dev libgssapi-krb5-2 libgssrpc4 libgtk2.0-0 libgtk2.0-common libgtk2.0-dev libk5crypto3 libkadm5clnt-mit7 libkadm5srv-mit7 libkdb5-4 libkrb5-3 libkrb5-dev libkrb5support0 libldap-2.4-2 libldap2-dev libmysqlclient-dev libmysqlclient16 libnotify-dev libnotify1 libpam-modules libpam-runtime libpam0g libparted0debian1 libpng12-0 libpng12-dev libpq-dev libpq5 libssl-dev libssl0.9.8 libtiff4 libudev0 libusb-0.1-4 linux-libc-dev mountall mysql-client mysql-client-5.1 mysql-client-core-5.1 mysql-common mysql-server mysql-server-5.1 mysql-server-core-5.1 openssh-client openssh-server openssl parted python-apt sudo tzdata udev upstart ureadahead wget xulrunner-1.9.2 xulrunner-1.9.2-dev
        The following packages are RECOMMENDED but will NOT be installed:
          colibri debhelper fakeroot hicolor-icon-theme libatk1.0-data libglib2.0-data libgtk2.0-bin libhtml-template-perl manpages-dev notification-daemon notify-osd ssl-cert xauth xfce4-notifyd
        88 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 85.8MB of archives. After unpacking 1712kB will be used.
        Do you want to continue? [Y/n/?] y
        Writing extended state information... Done
        Get:1 http://security.ubuntu.com/ubuntu/ lucid-updates/main libpam-modules 1.1.1-2ubuntu5 [358kB]
        Get:2 http://security.ubuntu.com/ubuntu/ lucid-updates/main base-files 5.0.0ubuntu20.10.04.2 [70.2kB]
        Get:3 http://security.ubuntu.com/ubuntu/ lucid-updates/main gzip 1.3.12-9ubuntu1.1 [102kB]
        Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc-bin 2.11.1-0ubuntu7.2
          404 Not Found [IP: 91.189.88.37 80]
        Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc6 2.11.1-0ubuntu7.2
          404 Not Found [IP: 91.189.88.37 80]
        Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc6-i686 2.11.1-0ubuntu7.2
        .........
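
    The 404s usually mean the mirror has published newer package versions and removed the ones the locally cached package lists still point to (2.11.1-0ubuntu7.2 here). A sketch of the usual fix, refreshing the lists before upgrading:

        sudo aptitude update
        sudo aptitude safe-upgrade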

    Read the article

  • Corrupt mysql system tables

    - by psynnott
    I am having issues with the columns_priv table in the mysql system database. I cannot add new users currently. I have tried repairing it using mysqlcheck --auto-repair --all-databases --password but I get the following output:

        mysql.columns_priv
        Error : Incorrect file format 'columns_priv'
        error : Corrupt

    Is there any other way to repair this table, or how do I go about replacing it with a blank table? What would I lose by doing that? Thank you

    Edit (additional info): mysqld is currently using 100% CPU constantly. Looking at SHOW PROCESSLIST, I get:

        mysql> show processlist;
        Id  | User             | Host      | db    | Command | Time | State          | Info
        5   | debian-sys-maint | localhost | mysql | Query   | 1589 | Opening tables | ALTER TABLE tables_priv MODIFY Column_priv set('Select','Insert','Update','References') COLL |
        752 | root             | localhost | NULL  | Query   | 0    | NULL           | show processlist
        2 rows in set (0.00 sec)
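
    The grant tables are MyISAM, so they can usually be rebuilt offline with myisamchk. A minimal sketch, assuming the Debian-default datadir of /var/lib/mysql (kill the stuck ALTER thread first with KILL 5; and stop the server before touching the files):

        sudo service mysql stop
        sudo myisamchk --recover /var/lib/mysql/mysql/columns_priv.MYI
        sudo service mysql start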

    Read the article

  • Does LINQ require significantly more processing cycles and memory than lower-level data iteration techniques?

    - by Matthew Patrick Cashatt
    Background: I am currently enduring grueling tech interviews for positions that use the .NET stack, some of which include silly questions like this one, and some questions that are more valid. I recently came across an issue that may be valid but I want to check with the community here to be sure. When asked by an interviewer how I would count the frequency of words in a text document and rank the results, I answered that I would:

        - Use a stream object to put the text file in memory as a string.
        - Split the string into an array on spaces while ignoring punctuation.
        - Use LINQ against the array to .GroupBy() and .Count(), then OrderBy() said count.

    I got this answer wrong for two reasons:

        - Streaming an entire text file into memory could be disastrous. What if it was an entire encyclopedia? Instead I should stream one block at a time and begin building a hash table.
        - LINQ is too expensive and requires too many processing cycles. I should have built a hash table instead and, for each iteration, only added a word to the hash table if it didn't otherwise exist and then incremented its count.

    The first reason seems, well, reasonable. But the second gives me more pause. I thought that one of the selling points of LINQ is that it simply abstracts away lower-level operations like hash tables but that, under the veil, it is still the same implementation. Question: aside from a few additional processing cycles to call any abstracted methods, does LINQ require significantly more processing cycles to accomplish a given data iteration task than a lower-level approach (such as building a hash table) would?
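
    A minimal sketch of the two approaches side by side (the input array is hypothetical). GroupBy builds a hash-based Lookup internally, so the asymptotic work matches the manual dictionary; LINQ mostly adds constant-factor overhead for delegate calls and the intermediate grouping objects:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class WordCount
        {
            static void Main()
            {
                var words = new[] { "the", "cat", "sat", "on", "the", "mat" };

                // manual hash table: one pass, increments in place
                var counts = new Dictionary<string, int>();
                foreach (var word in words)
                {
                    int n;
                    counts.TryGetValue(word, out n);
                    counts[word] = n + 1;
                }

                // LINQ: same hashing under the veil, plus grouping allocations
                var ranked = words.GroupBy(w => w)
                                  .Select(g => new { Word = g.Key, Count = g.Count() })
                                  .OrderByDescending(x => x.Count);

                foreach (var r in ranked)
                    Console.WriteLine("{0}: {1}", r.Word, r.Count);
            }
        }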

    Read the article

  • How to make Windows 7 use the internet connection that I specify

    - by user138957
    I have a LAN adapter and a USB wireless internet connection. When both are connected, Windows 7 always uses the USB. I tried changing the metric values but no luck. Let me explain the steps I took. Currently automatic metric is set on all adapters.

        - LAN connected. ipconfig shows that it is connected to the correct IP/DNS/gateway etc. The IPv4 route table shows metric 24.
        - Then connected USB. ipconfig shows USB connectivity then LAN, in that order. Internet is now through USB. The IPv4 route table shows metric 4249 for LAN and 41 for USB. The gateway for USB shows "on-link". netstat -rn shows the USB device on top.
        - Changed the LAN metric to 5 and now the route table shows LAN as 9 (not sure why it added 4) and USB as 41. netstat shows LAN then USB. ipconfig shows LAN then USB. But the connection is still through USB. How do I know? Task manager shows utilization only through USB, and the speed shown is around 1 Mbps rather than the LAN's 10 Mbps.

    How can I get Windows 7 to use the LAN while USB is connected? I am just trying to use USB as a backup in case I lose the LAN connection. Please help! I thought I would set the USB metric manually to, say, 10, but it says I have to reconnect for it to take effect. Currently USB still shows below LAN and still has 9 and 41 in the table. Disconnected USB: the table shows the LAN metric as 24 (not sure why it got changed from 9 and the setting got reverted back to automatic). Reconnected USB: the setting still shows 10 but the route table shows 11 for USB, and LAN shows 4249 (the setting shows 4245, 4 less). For some reason reconnecting USB resets the LAN setting. Thanks
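
    A sketch of forcing the preference from an elevated command prompt (the adapter names here are assumptions; use the names shown by netsh interface ipv4 show interfaces). A lower metric wins, so give the LAN a much smaller value than the USB adapter and verify with route print:

        netsh interface ipv4 set interface "Local Area Connection" metric=5
        netsh interface ipv4 set interface "Wireless USB" metric=50
        route print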

    Read the article

  • New Success Story: McGrath RentCorp Improves Business Reporting and Analytics Capabilities with Cloud-based Business Intelligence Solution

    - by LanaProut
    McGrath RentCorp worked with Jade Global, an Oracle Platinum Partner, to scope, design, and execute the deployment, using its Oracle Accelerate solution to jumpstart the process and accelerate the time to value. Click here to view the full story.

    Read the article

  • dhcp3-server (dhcpd) is tampering with host NIC

    - by user61000
    Hi all, I have a Debian box that is serving as a router (using iptables NAT). When first turned on, everything works fine for a few minutes. Then the DHCP server assigns an IP (other than 192.168.0.1) to its host NIC, eth0. This is NOT what I want. I just want dhcp3-server to listen on eth0, not assign it an IP and change the kernel routing table. This of course ruins the NAT capabilities of the box. How can I tell dhcp3-server NOT to do this? Thanks

    Before dhcp3-server tampers with eth0, the IP is 192.168.0.1, and the routing table looks like this:

        ~# netstat -r
        Kernel IP routing table
        Destination   Gateway        Iface
        192.168.0.0   *              eth0
        173.33.220.0  *              eth1
        default       173.33.220.1   eth1

    After dhcp3-server tampers with eth0, the IP is 192.168.0.3, and the routing table looks like this:

        ~# netstat -r
        Kernel IP routing table
        Destination   Gateway        Iface
        192.168.0.0   *              eth0
        173.33.220.0  *              eth1
        default       192.168.0.1    eth0
        default       173.33.220.1   eth1

    SETUP: The outbound NIC is eth1, the internal NIC is eth0.

    /etc/network/interfaces:

        ...
        iface eth0 inet static
            address 192.168.0.1
            netmask 255.255.255.0

    /etc/default/dhcp3-server:

        INTERFACES="eth0"
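
    One hedged observation: dhcpd only hands out leases, it never reconfigures its own NIC, so the symptoms point to a DHCP client also running on eth0 and accepting a lease from the local server. A sketch of how to check and undo that:

        # is a stray client holding eth0?
        ps aux | grep -v grep | grep dhclient

        # make sure no config still lists eth0 as dhcp
        grep -n eth0 /etc/network/interfaces

        # release the lease and restore the static config
        dhclient -r eth0
        ifdown eth0 && ifup eth0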

    Read the article

  • Do cross reference database tables have a place in domain driven design?

    - by Mike Cellini
    First some background. Let's say we have a system where a customer is placing an order in a web interface. The items that customer is ordering can be priced in various ways, sometimes including the cost of delivery and sometimes not at all. That pricing effectively depends on a variety of factors, including the vendor's own pricing model, that vendor's individual contracts with customers, and that vendor's contracts with its own suppliers. Let's assume that once a customer places an order for a particular item and chooses a contract, if any, the method of delivery can be determined by variables on those contracts. Those delivery methods also live in their own table in the database and have various properties consumed downstream. It makes sense that a cross reference or lookup table would store that information. That table would be loaded into the domain and could then be used to apply the appropriate delivery method while processing the order. Does this make sense in the context of domain driven design? Or is my thinking too relational? Is this logic that should be built into its own class/method (I mean beyond applying the cross reference table data)?

    Read the article

  • MySQL Unions/Subselects not utilizing keys from associated tables

    - by Brett
    I've noticed by doing EXPLAINs that when a MySQL union between two tables is used, MySQL creates a temporary table, but the temp table does not use keys, so queries are slowed considerably. Here is an example:

        SELECT * FROM (
            SELECT `part_number`, `part_manufacturer_clean`, `part_number_clean`,
                   `part_heci`, `part_manufacturer`, `part_description`
            FROM `new_products` AS `a`
            UNION
            SELECT `part` AS `part_number`, `manulower` AS `part_manufacturer_clean`,
                   `partdeluxe` AS `part_number_clean`, `heci` AS `part_heci`,
                   `manu` AS `part_manufacturer`, `description` AS `part_description`
            FROM `warehouse` AS `b`
        ) AS `c`
        WHERE `part_manufacturer_clean` = 'adc'

    EXPLAIN yields this:

        id | select_type  | table      | type | possible_keys | key    | key_len | ref    | rows   | Extra
        1  | PRIMARY      | <derived2> | ALL  | (NULL)        | (NULL) | (NULL)  | (NULL) | 17206  | Using where
        2  | DERIVED      | a          | ALL  | (NULL)        | (NULL) | (NULL)  | (NULL) | 17743  |
        3  | UNION        | b          | ALL  | (NULL)        | (NULL) | (NULL)  | (NULL) | 5757   | (NULL)
           | UNION RESULT | <union2,3> | ALL  | (NULL)        | (NULL) | (NULL)  | (NULL) | (NULL) |

    In this case, part_manufacturer_clean and manulower are keys in both tables. When I don't use the subselects and union, and just use one table, everything works fine. I'm not sure if the issue is with the union or with the subselects. Is there any way to union two tables and still use keys/indexes for performance?
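
    A sketch of the usual workaround: a derived table is materialized without indexes, so push the WHERE into each branch of the UNION instead of filtering the wrapper. Each branch can then use its own index (part_manufacturer_clean on new_products, manulower on warehouse):

        SELECT `part_number`, `part_manufacturer_clean`, `part_number_clean`,
               `part_heci`, `part_manufacturer`, `part_description`
        FROM `new_products`
        WHERE `part_manufacturer_clean` = 'adc'
        UNION
        SELECT `part`, `manulower`, `partdeluxe`, `heci`, `manu`, `description`
        FROM `warehouse`
        WHERE `manulower` = 'adc';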

    Read the article

  • To reorganize code, what to choose between library and service?

    - by essbeev
    I want to reorganize a large application that has a lot of code duplication into multiple components. In addition, some code is also duplicated across other applications. The common set of functionality that can be taken out of the main application is clearly defined. Now, do I write a library or do I write a service for this functionality, so that all such applications continue to work and there is only one code base (of common functionality) to maintain?

    Read the article

  • How do you handle the task of changing the schema of a production MySQL database?

    - by Continuation
    One of the biggest complaints I have heard about MySQL is that it locks up a table if you try to change its schema, for example by adding a column or adding an index. By "locking up the table", does that mean I can neither read from nor write to the table? Sometimes for hours? That seems a pretty severe limitation. I was going to use MySQL for my new project but this gives me pause. Is there a workaround for this? How do you handle the task of changing the schema of your production MySQL database? By the way, someone told me PostgreSQL doesn't have this problem. Is that true - can I both read from and write to a PostgreSQL table while changing its schema? Is there any performance penalty incurred? Would love to hear your experiences.
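
    A sketch of the classic shadow-table workaround (users and last_login are hypothetical names). Build an altered copy, backfill it, then swap with an atomic RENAME; writes that land during the backfill still have to be reconciled, which is what tools such as Percona's pt-online-schema-change automate with triggers:

        CREATE TABLE users_new LIKE users;
        ALTER TABLE users_new ADD COLUMN last_login DATETIME NULL;

        -- backfill; the new column gets NULL for existing rows
        INSERT INTO users_new SELECT u.*, NULL FROM users u;

        -- atomic swap; readers never see a missing table
        RENAME TABLE users TO users_old, users_new TO users;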

    Read the article

  • Shared Layout, Style, Images and Javascript

    Nowadays it's very common for a medium/large scale company to have many websites. Users should be presented with the same look and feel in all the websites for a better user experience. This article shows how to define the common layout, images and styles once and how to use them for all web sites using ASP.NET.

    Read the article

  • Strategy for Incremental Datasource fetchings in Excel

    - by user1352530
    I am in a scenario with a table that is refreshed by a third-party app every week. I need to keep accumulating all the data in Excel, using an ODBC connection to the database. I am wondering:

        - Approach 1: Is there a way to force Excel to append results on every update (triggered according to a parameter that indicates the week)? I tried to define the table the connection loads into using a dynamic reference, but once it is anchored the first time, the table position is never redefined.
        - Approach 2: Use an ETL to accumulate all weekly results into a staging table and then connect Excel to it in real time. But I would need a mechanism for caching old data, as I cannot let Excel's opening time grow without bound. Imagine after 10 years: Excel would need to fetch 10 years of data at opening before showing anything. Is there a way to store already-fetched data and increment it in real time (when the workbook is opened) by selecting only the new data (with a query/filter of some kind)?

    Thanks. EDIT: Maybe it's better to ask it this way: what is the optimal strategy for a table that keeps growing and needs to be read in real time by Excel? I just don't want to fetch absolutely all the data after some months...
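
    A sketch of the staging table from approach 2, with the week as part of the key (the table and column names are hypothetical). The weekly ETL inserts only the new week, so Excel's query can be bounded with a date filter instead of fetching the whole history:

        CREATE TABLE weekly_snapshot (
            week_ending DATE           NOT NULL,
            source_id   INT            NOT NULL,
            amount      DECIMAL(10, 2) NOT NULL,
            PRIMARY KEY (week_ending, source_id)
        );

        -- the ETL appends one week per run
        INSERT INTO weekly_snapshot
        SELECT '2012-07-01', s.id, s.amount FROM source_table s;

        -- Excel's ODBC query stays fast by bounding the range
        SELECT * FROM weekly_snapshot WHERE week_ending >= '2012-01-01';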

    Read the article

  • Cloud Strategy for Partners Announced at OOW

    - by Cinzia Mascanzoni
    Oracle made a significant announcement about its Cloud Strategy for partners: Oracle has unveiled a comprehensive new set of Oracle Cloud partner programs and enablement resources that help partners to speed time to market with new cloud-based services and solutions and deliver increased value to customers. New Oracle PartnerNetwork Cloud offerings include an Oracle Cloud Referral Program, Oracle Cloud Specialization featuring RapidStart and Oracle Cloud Builder Specializations, Oracle Cloud Resale Program and Oracle Platform Services for Independent Software Vendors (ISVs).

    Read the article

  • InnoDb Overhead?

    - by Rimary
    I just converted several large tables from MyISAM to InnoDB. When I view the tables in phpMyAdmin, they show a significant amount of overhead (one table has 6.8GB). Optimizing the tables (which isn't a supported command on InnoDB) has no effect like it does on MyISAM. Is this a result of InnoDB's ever-growing data file that never returns space even after deletes? If that's the case, I've never seen overhead like this before from other InnoDB tables. Is there a way to clean this up? Edit: here are the things I've tried (with no success): optimize table, reorder the table by primary key, defragment the table.
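
    A hedged sketch of the standard cleanup (big_table is a placeholder name). With innodb_file_per_table enabled, a null ALTER rebuilds the table and its indexes into a fresh .ibd file and returns the space to the OS; without it, the rebuild only frees pages back into the shared ibdata file, which never shrinks on disk:

        -- in my.cnf, before rebuilding (affects newly (re)built tables only):
        --   [mysqld]
        --   innodb_file_per_table = 1

        ALTER TABLE big_table ENGINE=InnoDB;  -- null ALTER: full rebuild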

    Read the article

  • ubuntu 12.04 installation problem on windows 7 64bit

    - by zakariya
    06-26 20:57 ERROR TaskList: Extraction failed with code: 2
    Traceback (most recent call last):
      File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
      File "\lib\wubi\backends\win32\backend.py", line 450, in extract_diskimage
    Exception: Extraction failed with code: 2
    06-26 20:57 DEBUG TaskList: # Cancelling tasklist
    06-26 20:57 DEBUG TaskList: # Finished tasklist
    06-26 20:57 ERROR root: Extraction failed with code: 2
    Traceback (most recent call last):
      File "\lib\wubi\application.py", line 58, in run
      File "\lib\wubi\application.py", line 132, in select_task
      File "\lib\wubi\application.py", line 158, in run_installer
      File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__
      File "\lib\wubi\backends\win32\backend.py", line 450, in extract_diskimage
    Exception: Extraction failed with code: 2

    Read the article

  • Rebuild the index with REINDEX [closed]

    - by kuttyarif
    WARNING: index "pk_alarmid" contains 1363436 row versions, but table contains 26 row versions
    HINT: Rebuild the index with REINDEX.
    WARNING: index "alarm_uei_idx" contains 1363434 row versions, but table contains 26 row versions
    HINT: Rebuild the index with REINDEX.
    WARNING: index "alarm_nodeid_idx" contains 1363434 row versions, but table contains 26 row versions
    HINT: Rebuild the index with REINDEX.
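
    These warnings come from PostgreSQL (typically during VACUUM) and mean the indexes hold entries for rows that no longer exist. A minimal sketch of the fix the HINT suggests, assuming the indexes belong to a table named alarms (REINDEX takes an exclusive lock while it runs):

        -- rebuild every index on the table in one pass
        REINDEX TABLE alarms;

        -- or rebuild just the index a warning names
        REINDEX INDEX pk_alarmid;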

    Read the article

  • You cannot do cross joins in SQL Azure but there is a way around that....

    - by SeanBarlow
    So I was asked today how to do cross joins in SQL Azure using LINQ. Well, the simple answer is you can't: it is not supported. But there are ways around that, and the solution is actually very simple and easy to implement. So here is what I did and how I did it. I created two SQL Azure databases. The first database is called AccountDb and has a single table named Account, which has an Id, CompanyId and Name in it. The second database I called CompanyDb and it contains two tables. The first table I named Company and the second I named Address. The Company table has an Id and Name column. The Address table has an Id and CompanyId column. Since we cannot do cross-database joins in Azure, we have to have one of the models preloaded with data. I simply put the Accounts into a List and use that in my join.

        var accounts = new AccountsModelContainer().Accounts.ToList();
        var companies = new CompanyModelContainer().Companies;
        var query = from account in accounts
                    join company in
                        (
                            from c in companies
                            select c
                        ) on account.CompanyId equals company.Id
                    select new AccountView()
                    {
                        AccountName = account.Name,
                        CompanyName = company.Name,
                        Addresses = company.Addresses
                    };
        return query.ToList();

    So as long as you have your data loaded from one of the contexts, you can still execute your queries and get back the data you want.

    Read the article

  • MySQL reclaim index space after large delete?

    - by cdunn
    After performing a large delete in MySQL, I understand you need to run a null ALTER to reclaim disk space. Is this also true for reclaiming index space? We have tables using 10G of index space, have deleted/archived large chunks of this data, and are unsure if we need to rebuild the table in order to decrease the size of the index. Can anyone offer any advice? We are trying to avoid rebuilding the table since it would take quite a while and lock the table. Thanks!
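
    A hedged note with a sketch (parts is a placeholder table name): InnoDB stores secondary indexes in the same tablespace as the rows, so the same rebuild that reclaims row space rebuilds and shrinks the indexes; there is no separate index-only reclaim that avoids the table lock:

        -- the null ALTER rebuilds rows and all indexes together
        ALTER TABLE parts ENGINE=InnoDB;

        -- equivalent for InnoDB: OPTIMIZE maps to recreate + analyze
        OPTIMIZE TABLE parts;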

    Read the article
