Search Results

Search found 111248 results on 4450 pages for 'end user computing'.

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami

    A typical data warehouse contains Star and/or Snowflake schemas, made up of Dimensions and Facts. The facts store various numerical information, including amounts: for example, Order Amount, Invoice Amount, etc. Given the truly global nature of business nowadays, end users want to view reports in their own currency, or in a global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts at converted rates, either by pre-storing them or by doing on-the-fly conversions while displaying the reports to the users.

    Source Systems
    OBIA caters to various source systems such as EBS, PSFT, Siebel, JDE, Fusion, etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions, and presenting them to the OLTP users. For example, EBS stores conversion rates between currencies classified by rate type, such as Corporate rate, Spot rate, Period rate, etc.; Siebel stores exchange rates by rate types such as Daily. EBS/Fusion store the conversion rates for each day, whereas PSFT/Siebel store them for a range of days. PSFT has a Rate Multiplication Factor and a Rate Division Factor from which the rate must be calculated, whereas the other source systems store the currency exchange rate directly.

    OBIA Design
    Consolidating data from various disparate source systems poses the challenge of conforming the various currencies, rate types, exchange rates, etc., and of designing the best way to present the amounts to users without affecting performance. When consolidating the data for reporting in OBIA, we have designed mechanisms in the Common Dimension to allow users to report in their required currencies. OBIA facts store amounts in various currencies:

    Document Currency: the currency of the actual transaction. For a multinational company, this can be in various currencies.

    Local Currency: the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company.

    Global Currencies: OBIA provides five global currencies; three are used across all modules, and the last two are for CRM only. A global currency is very useful when creating reports where the data is viewed enterprise-wide. For example, a US-based multinational would want to see reports in USD, and would choose USD as one of its global currencies. OBIA allows users to define up to five global currencies during the initial implementation.

    The term Currency Preference designates the set of values Document Currency, Local Currency, Global Currency 1, Global Currency 2, and Global Currency 3, which are shared among all modules. There are four more currency preferences specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5, which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency as the currency preference, the data is shown in the currency of the Ledger (or Business Unit) selected in the prompt, so it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section Changing Currency Preferences in the Dashboard below.

    Design Logic
    When extracting the fact data, the OOTB mappings extract and load the document amount and the local amount into the target tables. They also load the exchange rates required to convert the document amount into the corresponding global amounts.
    If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the local currency code and the local exchange rate. The load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the global currencies and looks up the corresponding exchange rates.

    The lookup of exchange rates is done via the Exchange Rate Dimension, provided as a common/conforming dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and rate type. Two physical tables, W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G, provide the lookups and conversions between currencies; the data is loaded from the source system's Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. W_GLOBAL_EXCH_RATE_G, on the other hand, stores the conversions between the document currency and the pre-defined five global currencies for each day. Based on the requirements, the fact mappings can use one or both tables to do the conversion.

    Currency design in OBIA also taps into the MLS and Domain architecture, allowing users to map the currencies to a universal Domain at implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters.

    Some Gotchas to Look For
    It is necessary to think through the currencies during the initial implementation.
    1) Identify the various types of currencies used by your business. Understand what your Local (or Base) and Document currencies will be, and identify the global currencies in which your users will want to view reports; this will depend on the global nature of your business. Changes to these currencies later in the project, while permitted, may force full data loads and hence lost time.
    2) If you have multiple source systems, make sure that the global currencies and global rate types chosen in Configuration Manager have the corresponding source-specific counterparts. In other words, make sure that for every DW-specific value chosen for Currency Code or Rate Type, a source Domain mapping has already been done.

    Technical Section
    This section briefly describes the technical scenarios employed in the OBIA adaptors to extract data from each source system. As explained in the previous sections, OBIA has two main tables which store currency rate information: W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G. W_EXCH_RATE_G stores all the currency conversions present in the source system and captures data for a date range. W_GLOBAL_EXCH_RATE_G stores global currency conversions at a daily level; the challenge here is to store all five global currency exchange rates in a single record for each From Currency. Let's look further into the source system extraction logic for each of these tables.
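    Before getting into source specifics, here is a minimal illustrative sketch of the kind of range-based lookup a fact mapping performs against W_EXCH_RATE_G. The fact table and all column names below are assumptions for illustration only, not the shipped OBIA definitions:

        -- Illustrative only: column and fact-table names are assumed,
        -- not the actual OBIA DDL. Convert a document amount to a target
        -- currency via the date-range exchange rate table.
        SELECT f.doc_amt * r.exch_rate AS converted_amt
        FROM   w_sample_fact f
        JOIN   w_exch_rate_g r
          ON   r.from_curcy_cd = f.doc_curcy_cd
         AND   r.to_curcy_cd   = 'USD'           -- target currency
         AND   r.rate_type     = 'Corporate'     -- rate type chosen at configuration
         AND   f.transaction_dt BETWEEN r.effective_from_dt AND r.effective_to_dt;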
    EBS
    In EBS, currency data is stored in the GL_DAILY_RATES table. As the name indicates, GL_DAILY_RATES has data at a daily level. In the warehouse, however, we store the data with a date range and insert a new range record only when the exchange rate changes for a particular From Currency, To Currency, and Rate Type. The main logical steps employed in this process are:
    0. (Incremental flow only) Clean up the data in W_EXCH_RATE_G: delete the records whose Start Date is greater than the minimum conversion date, and update the End Date of the existing records. The incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter.
    1. Compress the daily data from GL_DAILY_RATES into range records: generate the previous rate, previous date, and next date for each daily record from the OLTP.
    2. Filter out the records whose conversion rate is the same as the previous rate, or whose conversion date lies within a single-day range. Mark the records as 'Keep' or 'Filter' and derive the final End Date for each single range record (a unique combination of From Date, To Date, Rate, and Conversion Date).
    3. Filter out the records marked as 'Filter' in the INFA map.
    Steps 1-3 load the staging table W_EXCH_RATE_GS; step 0 updates/deletes W_EXCH_RATE_G directly. The SIL map then inserts/updates the GS data into W_EXCH_RATE_G. Together, these steps convert the daily records in GL_DAILY_RATES into range records in W_EXCH_RATE_G.

    We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G, where data is stored at a daily granularity. However, we do need to pivot the data, because rates that appear in multiple rows in the source tables must be stored in different columns of the same row in the DW. We use GROUP BY and CASE logic to achieve this.

    Fusion
    Fusion has extraction logic very similar to EBS. The only difference is that the cleanup logic mentioned in step 0 above does not use the $$XRATE_UPD_NUM_DAY parameter: in Fusion we bring in all the exchange rates in incremental runs as well and do the cleanup, and the SIL then takes care of inserts/updates accordingly.

    PeopleSoft
    PeopleSoft does not have From Date and To Date explicitly in the source tables. (Note that the following is achieved from PS1 onwards only.) Consider an example:
    1 Jan 2010 - USD to INR - 45
    31 Jan 2010 - USD to INR - 46
    PSFT stores records in this fashion, meaning that the exchange rate of 45 for USD to INR is applicable from 1 Jan 2010 to 30 Jan 2010, and we need to store the data in this fashion in the DW. PSFT also stores the exchange rate as RATE_MULT and RATE_DIV, so we must compute RATE_MULT/RATE_DIV to get the correct exchange rate. We generate the From Date and To Date while extracting data from the source, and this has certain assumptions: if a record gets updated/inserted in the source, it is extracted in incremental runs, and if this updated/inserted record falls between other dates, we also extract the preceding and succeeding records (based on dates). This is required because we need to generate range records, and there are now three records whose ranges have changed. Taking the same example as above, if a new record is inserted on 15 Jan 2010, the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan, and 31 Jan to the next available date. Even though the 1 Jan and 31 Jan records have not changed, we still extract them because their ranges are affected. Similar logic is used for global exchange rate extraction: we create the range records and load them into a temporary table, then join to the Day Dimension, create individual daily records, and pivot the data to get the five global exchange rates for each From Currency, Date, and Rate Type.
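    To picture the pivot described above, here is an illustrative sketch of the GROUP BY / CASE technique; the staging table, column names, and bind parameters are assumptions for illustration, not the shipped OBIA mappings:

        -- Illustrative sketch only: names are assumed, not the actual OBIA logic.
        -- Collapse one row per (from currency, global currency, date, rate type)
        -- into a single row with the five global rates in five columns.
        SELECT from_curcy_cd,
               exch_dt,
               rate_type,
               MAX(CASE WHEN to_curcy_cd = :global1 THEN exch_rate END) AS global1_exch_rate,
               MAX(CASE WHEN to_curcy_cd = :global2 THEN exch_rate END) AS global2_exch_rate,
               MAX(CASE WHEN to_curcy_cd = :global3 THEN exch_rate END) AS global3_exch_rate,
               MAX(CASE WHEN to_curcy_cd = :global4 THEN exch_rate END) AS global4_exch_rate,
               MAX(CASE WHEN to_curcy_cd = :global5 THEN exch_rate END) AS global5_exch_rate
        FROM   daily_rates_staging
        GROUP  BY from_curcy_cd, exch_dt, rate_type;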
    Siebel
    Siebel facts depend heavily on global exchange rates, and almost none of them use individual exchange rates; in other words, W_GLOBAL_EXCH_RATE_G is the main table used for Siebel from the PS1 release onwards. As of January 2002, the Euro triangulation method for converting between currencies belonging to EMU members is no longer needed for present and future currency exchanges. However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries: if a country is a member of the European Monetary Union (EMU), its currency should be converted to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Because of this, there are multiple extraction flows in SEBL (i.e., EUR to EMU, EUR to NonEMU, EUR to DMC, and so on), and we load W_EXCH_RATE_G through these multiple flows. This has been kept the same as in previous versions of OBIA. W_GLOBAL_EXCH_RATE_G, being a new table, has no such needs. However, SEBL, like PSFT, does not have From Date and To Date columns in the source tables, so we use extraction logic similar to that explained in the PeopleSoft section.

    What if all 5 Global Currencies configured are the same?
    As mentioned in previous sections, from PS1 onwards we store global exchange rates in the W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves pivoting data from multiple rows into a single row with five global exchange rates in five columns, using the CASE and GROUP BY logic described above. This approach poses a unique problem when all five global currencies chosen are the same. For example, if the user configures all five global currencies as 'USD', the extract logic cannot generate a record for From Currency = USD, because not all source systems will have a USD-to-USD conversion record. We have _Generated mappings to take care of this case: they generate a record with Conversion Rate = 1.

    Reusable Lookups
    Before PS1, we had a mapplet for currency conversions. In PS1, we have only reusable lookups: LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups carry another layer of logic so that all the lookup conditions are met when they are used in the various fact mappings. Anyone who wants to do a lookup on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G must use these lookups; a direct join or lookup on the tables might return wrong data.

    Changing Currency Preferences in the Dashboard
    In the 7.9.6.x series, all amount metrics in OBIA showed the Global Currency 1 amount, and customers had to change the metric definitions to show them in another currency preference. (Project Analytics has supported currency preferences since the 7.9.6 release, and a tech note was published so that customers of other modules could add toggling between currency preferences to the solution.)

    List of Currency Preferences
    Starting from the 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. A new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the XML file userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder under <home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml. This file contains the list of currency preferences, such as "Local Currency", "Global Currency 1", and so on, which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic.
    In Static mode, all users see the full list as given in the user preference currencies file. In Dynamic mode, the list shown in the currency prompt drop-down is the result of a dynamic query specified in the same file. Customers can build security into the RPD so that the list of currency preferences is based on user roles; BI Applications built a subject area, "Dynamic Currency Preference", to run this query and give every user only the list of currency preferences required by his application roles.

    Adding Currency to an Amount Field
    When the user selects one of the items from the currency prompt, all the amounts on that page are shown in the currency corresponding to that preference. For example, if the user selects "Global Currency 1" from the prompt, all data is shown in Global Currency 1 as specified in the Configuration Manager. If the user selects "Local Currency", all amount fields are shown in the currency of the Business Unit selected in the BU filter of the same page. If no particular Business Unit is selected in that filter, and the data selected by the query contains amounts in more than one currency (for example, one BU has USD as its functional currency and another has EUR), then subtotals will not be available (USD and EUR amounts cannot be added in one field) and, depending on the setup (see the next paragraph), the user may receive an error.

    There are two ways to add the currency field to an amount metric:
    1. In the form of a currency code, like USD or EUR. For this, the user adds the field "Apps Common Currency Code" to the report. This field is available in every subject area, usually under a table such as "Currency Tag" or "Currency Code".
    2. In the form of a currency symbol ($ for USD, € for EUR, etc.). For this, the user formats the amount metrics in the report as a currency column by specifying the currency tag column in the Column Properties option of the Column Actions drop-down list. Typically this column should be the "BI Common Currency Code" available in every subject area. Select the Column Properties option in the Edit list of a metric; in the Data Format tab, select Custom as "Treat Number As"; and enter the following syntax under Custom Number Format:
    [$:currencyTagColumn=Subjectarea.table.column]
    where the column is the "BI Common Currency Code" defined to take the currency code value based on the currency preference chosen by the user in the currency preference prompt.
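    As a purely hypothetical illustration of that syntax, with a subject area named "Financials - GL", the custom format string might look like the following; the subject area, table, and column names here are placeholders rather than values from any shipped catalog, and quoting conventions may differ in your environment:

        [$:currencyTagColumn="Financials - GL"."Currency Tag"."BI Common Currency Code"]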

    Read the article

  • Grub 'Read Error' - Only Loads with LiveCD

    - by Ryan Sharp
    Problem
    After installing Ubuntu to complete my Windows 7 / Ubuntu 12.04 dual-boot setup, Grub just wouldn't load at all unless I booted from the LiveCD. Afterwards, everything works completely normally. However, this workaround isn't a solution and I'd like to be able to boot without the aid of a disc.

    fdisk -l
    Using the fdisk -l command, I am given the following:

    Disk /dev/sda: 64.0 GB, 64023257088 bytes
    255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x324971d1

    Device Boot Start End Blocks Id System
    /dev/sda1 2048 206847 102400 7 HPFS/NTFS/exFAT
    /dev/sda2 208896 48957439 24374272 7 HPFS/NTFS/exFAT
    /dev/sda3 * 48959486 124067839 37554177 5 Extended
    /dev/sda5 48959488 124067839 37554176 83 Linux

    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xc0ee6a69

    Device Boot Start End Blocks Id System
    /dev/sdb1 1024208894 1953523711 464657409 5 Extended
    /dev/sdb3 * 2048 1024206847 512102400 7 HPFS/NTFS/exFAT
    /dev/sdb5 1024208896 1937897471 456844288 83 Linux
    /dev/sdb6 1937899520 1953523711 7812096 82 Linux swap / Solaris
    Partition table entries are not in disk order

    Disk /dev/sdc: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x292eee23

    Device Boot Start End Blocks Id System
    /dev/sdc1 2048 625141759 312569856 7 HPFS/NTFS/exFAT

    Bootinfoscript
    I've used the BootInfoScript, and received the following output:

    Boot Info Script 0.61 [1 April 2012]

    ============================= Boot Info Summary: ===============================

    => Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos5)/boot/grub on this drive.
    => Grub2 (v1.99) is installed in the MBR of /dev/sdb and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos5)/boot/grub on this drive.
    => Windows is installed in the MBR of /dev/sdc.

    sda1: __________________________________________________________________________
    File system: ntfs
    Boot sector type: Windows Vista/7: NTFS
    Boot sector info: No errors found in the Boot Parameter Block.
    Operating System:
    Boot files: /bootmgr /Boot/BCD

    sda2: __________________________________________________________________________
    File system: ntfs
    Boot sector type: Windows Vista/7: NTFS
    Boot sector info: No errors found in the Boot Parameter Block.
Operating System: Windows 7 Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe sda3: __________________________________________________________________________ File system: Extended Partition Boot sector type: Unknown Boot sector info: sda5: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 12.04.1 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sdb1: __________________________________________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sdb5: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Boot files: sdb6: __________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: sdb3: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: According to the info in the boot sector, sdb3 starts at sector 200744960. But according to the info from fdisk, sdb3 starts at sector 2048. According to the info in the boot sector, sdb3 has 823461887 sectors, but according to the info from fdisk, it has 1024204799 sectors. Operating System: Boot files: sdc1: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: ============================ Drive/Partition Info: ============================= Drive: sda _____________________________________________________________________ Disk /dev/sda: 64.0 GB, 64023257088 bytes 255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 2,048 206,847 204,800 7 NTFS / exFAT / HPFS /dev/sda2 208,896 48,957,439 48,748,544 7 NTFS / exFAT / HPFS /dev/sda3 * 48,959,486 124,067,839 75,108,354 5 Extended /dev/sda5 48,959,488 124,067,839 75,108,352 83 Linux Drive: sdb _____________________________________________________________________ Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdb1 1,024,208,894 1,953,523,711 929,314,818 5 Extended /dev/sdb5 1,024,208,896 1,937,897,471 913,688,576 83 Linux /dev/sdb6 1,937,899,520 1,953,523,711 15,624,192 82 Linux swap / Solaris /dev/sdb3 * 2,048 1,024,206,847 1,024,204,800 7 NTFS / exFAT / HPFS Drive: sdc _____________________________________________________________________ Disk /dev/sdc: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdc1 2,048 625,141,759 625,139,712 7 NTFS / exFAT / HPFS "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/sda1 A48056DF8056B80E ntfs System Reserved /dev/sda2 A8C6D6A4C6D671D4 ntfs Windows /dev/sda5 fd71c537-3715-44e1-b1fe-07537e22b3dd 
ext4 /dev/sdb3 6373D03D0A3747A8 ntfs Steam /dev/sdb5 6f5a6eb3-a932-45aa-893e-045b57708270 ext4 /dev/sdb6 469848c8-867a-41b7-b0e1-b813a43c64af swap /dev/sdc1 725D7B961CF34B1B ntfs backup ================================ Mount points: ================================= Device Mount_Point Type Options /dev/sda5 / ext4 (rw,noatime,nodiratime,discard,errors=remount-ro) /dev/sdb5 /home ext4 (rw) =========================== sda5/boot/grub/grub.cfg: =========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd set locale_dir=($root)/boot/grub/locale set lang=en_GB insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "${linux_gfx_mode}" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 3.2.0-29-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd linux /boot/vmlinuz-3.2.0-29-generic root=UUID=fd71c537-3715-44e1-b1fe-07537e22b3dd ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-29-generic } menuentry 'Ubuntu, with Linux 3.2.0-29-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd echo 'Loading Linux 3.2.0-29-generic ...' linux /boot/vmlinuz-3.2.0-29-generic root=UUID=fd71c537-3715-44e1-b1fe-07537e22b3dd ro recovery nomodeset echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.2.0-29-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows 7 (loader) (on /dev/sda1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root A48056DF8056B80E chainloader +1 } menuentry "Windows 7 (loader) (on /dev/sda2)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root A8C6D6A4C6D671D4 chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### -------------------------------------------------------------------------------- =============================== sda5/etc/fstab: ================================ -------------------------------------------------------------------------------- # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda5 during installation UUID=fd71c537-3715-44e1-b1fe-07537e22b3dd / ext4 noatime,nodiratime,discard,errors=remount-ro 0 1 # /home was on /dev/sdb5 during installation UUID=6f5a6eb3-a932-45aa-893e-045b57708270 /home ext4 defaults 0 2 # swap was on /dev/sdb6 during installation UUID=469848c8-867a-41b7-b0e1-b813a43c64af none swap sw 0 0 tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0 -------------------------------------------------------------------------------- =================== sda5: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) = boot/grub/core.img 1 = boot/grub/grub.cfg 1 = boot/initrd.img-3.2.0-29-generic 2 = boot/vmlinuz-3.2.0-29-generic 1 = initrd.img 2 = vmlinuz 1 ======================== Unknown MBRs/Boot Sectors/etc: ======================== Unknown BootLoader on sda3 00000000 63 6f 70 69 61 20 65 20 63 6f 6c 61 41 63 65 64 |copia e colaAced| 00000010 65 72 20 61 20 74 6f 64 6f 20 6f 20 74 65 78 74 |er a todo o text| 00000020 6f 20 66 61 6c 61 64 6f 20 75 74 69 6c 69 7a 61 |o falado utiliza| 00000030 6e 64 6f 20 61 20 63 6f 6e 76 65 72 73 c3 a3 6f |ndo a convers..o| 00000040 20 64 65 20 74 65 78 74 6f 20 70 61 72 61 20 76 | de texto para v| 00000050 6f 7a 4d 61 6e 69 70 75 6c 61 72 20 61 73 20 64 |ozManipular as d| 00000060 65 66 69 6e 69 c3 a7 c3 b5 65 73 20 71 75 65 20 |efini....es que | 00000070 63 6f 6e 74 72 6f 6c 61 6d 20 6f 20 61 63 65 73 |controlam o aces| 00000080 73 6f 20 64 65 20 57 65 62 73 69 74 65 73 20 61 |so de Websites a| 00000090 20 63 6f 6f 6b 69 65 73 2c 20 4a 61 76 61 53 63 | cookies, JavaSc| 000000a0 72 69 70 74 20 65 20 70 6c 75 67 2d 69 6e 73 4d |ript e plug-insM| 000000b0 61 6e 69 70 75 6c 61 72 20 61 73 20 64 65 66 69 |anipular as defi| 000000c0 6e 69 c3 a7 c3 b5 65 73 20 72 65 6c 61 63 69 6f |ni....es relacio| 000000d0 6e 61 64 61 73 20 63 6f 6d 20 70 72 69 76 61 63 |nadas com privac| 000000e0 69 64 61 64 65 41 63 65 64 65 72 20 61 6f 73 20 |idadeAceder aos | 000000f0 73 65 75 73 20 70 65 72 69 66 c3 a9 72 69 63 6f |seus perif..rico| 00000100 73 20 55 53 42 55 74 69 6c 69 7a 61 72 20 6f 20 |s USBUtilizar o | 00000110 73 65 75 20 6d 69 63 72 6f 66 6f 6e 65 55 74 69 |seu microfoneUti| 00000120 6c 69 7a 61 72 20 61 20 73 75 61 20 63 c3 a2 6d |lizar a sua c..m| 00000130 61 72 61 55 74 69 6c 69 7a 61 72 20 6f 20 73 65 |araUtilizar o se| 00000140 75 20 6d 69 63 72 6f 66 6f 6e 65 20 65 20 61 20 |u microfone e a | 00000150 63 c3 a2 6d 61 72 61 4e c3 a3 6f 20 66 6f 69 20 |c..maraN..o foi | 00000160 70 6f 73 73 c3 ad 76 65 6c 20 65 6e 63 6f 6e 74 |poss..vel encont| 00000170 72 61 72 20 6f 20 63 61 6d 69 6e 68 6f 20 61 62 |rar o caminho ab| 00000180 73 6f 6c 75 74 6f 20 70 61 72 61 20 6f 20 64 69 |soluto para o di| 00000190 72 65 63 74 c3 b3 72 69 6f 20 61 20 65 6d 70 61 |rect..rio a empa| 000001a0 63 6f 74 61 72 2e 4f 20 64 69 72 65 63 74 c3 b3 |cotar.O direct..| 000001b0 72 69 6f 20 64 65 20 65 6e 74 72 61 64 61 00 fe |rio de entrada..| 000001c0 ff ff 83 fe ff ff 02 00 00 00 00 10 7a 04 00 00 |............z...| 000001d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 000001f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.| 00000200 =============================== StdErr Messages: =============================== xz: (stdin): Compressed data is corrupt xz: (stdin): Compressed data is corrupt awk: cmd. line:36: Math support is not compiled in awk: cmd. 
    line:36: Math support is not compiled in
    awk: cmd. line:36: Math support is not compiled in
    awk: cmd. line:36: Math support is not compiled in
    awk: cmd. line:36: Math support is not compiled in
    awk: cmd. line:36: Math support is not compiled in
    awk: cmd. line:36: Math support is not compiled in

    Begging / Appreciation ;)
    If anything else is required to solve my problem, please ask. My only hope is that I can solve this without having to reinstall Grub (given how complicated the procedures are) or reinstall the OSes, as I have already done that about six times since Friday due to several other issues I've encountered. Thank you, and good day.

    System
    Ubuntu 12.04 64-bit / Windows 7 SP1 64-bit; 64GB SSD as boot/OS drive; 1TB HDD as /home, swap and Steam drive.

    Read the article

  • Do I need to be worried about these SMART drive temperatures?

    - by Steve Lorimer
    I have 5 hard drives in a machine sitting in a cupboard. /dev/sda is a 500GB Seagate drive and is the boot disk; /dev/sd{b,c,d,e} are 2TB drives in a RAID 6 configuration. smartctl is showing significantly higher temperatures (like ~140 degrees Celsius) on the RAID drives than on the boot drive. Do I need to be worried?

    /dev/sdb and /dev/sde are new Western Digital Black drives (new = 1 week); /dev/sdc and /dev/sdd are 5-year-old Hitachi drives.

    /dev/sda [SAT], Temperature_Celsius changed from 40 to 39
    /dev/sdc [SAT], Temperature_Celsius changed from 142 to 146
    /dev/sdc [SAT], Temperature_Celsius changed from 146 to 142
    /dev/sdd [SAT], Temperature_Celsius changed from 142 to 146
    /dev/sda [SAT], Airflow_Temperature_Cel changed from 61 to 62
    /dev/sda [SAT], Temperature_Celsius changed from 39 to 38
    /dev/sde [SAT], Temperature_Celsius changed from 107 to 108
    /dev/sdb [SAT], Temperature_Celsius changed from 108 to 109
    /dev/sdc [SAT], Temperature_Celsius changed from 146 to 150
    /dev/sdc [SAT], Temperature_Celsius changed from 146 to 150
    /dev/sda [SAT], Airflow_Temperature_Cel changed from 62 to 61
    /dev/sda [SAT], Temperature_Celsius changed from 38 to 39

    Update: Adding detailed drive information as per request:

    /dev/sda ===========================
    smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
    Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF INFORMATION SECTION ===
    Model Family: Seagate Pipeline HD 5900.2
    Device Model: ST3500312CS
    Serial Number: 5VV47HXA
    LU WWN Device Id: 5 000c50 02aad5ad6
    Firmware Version: SC13
    User Capacity: 500,107,862,016 bytes [500 GB]
    Sector Size: 512 bytes logical/physical
    Rotation Rate: 5900 rpm
    Device is: In smartctl database [for details use: -P show]
    ATA Version is: ATA8-ACS T13/1699-D revision 4
    SATA Version is: SATA 2.6, 1.5 Gb/s (current: 1.5 Gb/s)
    Local Time is: Tue Jun 3 10:54:11 2014 EST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    /dev/sdb ===========================
    smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
    Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF INFORMATION SECTION ===
    Device Model: WDC WD2003FZEX-00Z4SA0
    Serial Number: WD-WMC1F1398726
    LU WWN Device Id: 5 0014ee 003b8bd25
    Firmware Version: 01.01A01
    User Capacity: 2,000,398,934,016 bytes [2.00 TB]
    Sector Sizes: 512 bytes logical, 4096 bytes physical
    Rotation Rate: 7200 rpm
    Device is: Not in smartctl database [for details use: -P showall]
    ATA Version is: ACS-2 (minor revision not indicated)
    SATA Version is: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
    Local Time is: Tue Jun 3 10:54:11 2014 EST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    /dev/sdc ===========================
    smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
    Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF INFORMATION SECTION ===
    Model Family: Hitachi Deskstar 7K3000
    Device Model: Hitachi HDS723020BLA642
    Serial Number: MN1220F30WSTUD
    LU WWN Device Id: 5 000cca 369cc9f5d
    Firmware Version: MN6OA580
    User Capacity: 2,000,398,934,016 bytes [2.00 TB]
    Sector Size: 512 bytes logical/physical
    Rotation Rate: 7200 rpm
    Device is: In smartctl database [for details use: -P show]
    ATA Version is: ATA8-ACS T13/1699-D revision 4
    SATA Version is: SATA 2.6, 6.0 Gb/s (current: 3.0 Gb/s)
    Local Time is: Tue Jun 3 10:54:11 2014 EST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    /dev/sdd ===========================
    smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
    Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF INFORMATION SECTION ===
    Model Family: Hitachi Deskstar 7K3000
    Device Model: Hitachi HDS723020BLA642
    Serial Number: MN1220F30WST4D
    LU WWN Device Id: 5 000cca 369cc9f48
    Firmware Version: MN6OA580
    User Capacity: 2,000,398,934,016 bytes [2.00 TB]
    Sector Size: 512 bytes logical/physical
    Rotation Rate: 7200 rpm
    Device is: In smartctl database [for details use: -P show]
    ATA Version is: ATA8-ACS T13/1699-D revision 4
    SATA Version is: SATA 2.6, 6.0 Gb/s (current: 1.5 Gb/s)
    Local Time is: Tue Jun 3 10:54:11 2014 EST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    /dev/sde ===========================
    smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
    Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF INFORMATION SECTION ===
    Device Model: WDC WD2003FZEX-00Z4SA0
    Serial Number: WD-WMC1F1483782
    LU WWN Device Id: 5 0014ee 3002d235c
    Firmware Version: 01.01A01
    User Capacity: 2,000,398,934,016 bytes [2.00 TB]
    Sector Sizes: 512 bytes logical, 4096 bytes physical
    Rotation Rate: 7200 rpm
    Device is: Not in smartctl database [for details use: -P showall]
    ATA Version is: ACS-2 (minor revision not indicated)
    SATA Version is: SATA 3.0, 6.0 Gb/s (current: 1.5 Gb/s)
    Local Time is: Tue Jun 3 10:54:11 2014 EST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    Read the article

  • Outlook 2007 Does Not Accept Login Credentials, OWA Webmail Does. Troubleshooting Advice?

    - by Chris
    I am trying to connect Outlook 2007 to Exchange (Hosted Exchange from Rackspace). Soon, I will need to roll this out for our entire office. With the Exchange account added to Outlook, Outlook starts up and asks for the user's username and password. Unfortunately, it doesn't like the password I use for it. I can confirm this username (email address) and password combo works by using Outlook WebMail, and another user (in another network/office) confirmed the Exchange account does work within his Outlook client.

    In my network/office, I can confirm that an Outlook 2007 client (under Windows 7) can connect to the Hosted Exchange server from Rackspace. However, I have not been able to get Outlook 2007 (under Windows XP SP3) to connect to the very same Exchange server that Outlook 2007 (under Windows 7) can connect to. Outlook continuously prompts me for the username and password and does not accept the correct combination.

    Now, regarding the Outlook client that cannot connect/login to Exchange:
    - The user has full admin rights on the workstation.
    - We do not run a domain controller/LDAP.
    - The firewall on the workstation has been disabled.
    - Real-time file scanning in Microsoft Security Essentials has been disabled.
    - There are no virus-scanning applications that would interface with Outlook or an email server.
    - The Exchange account is set up on a newly created Outlook profile.
    - The network firewall does not log any blocked attempts.
    - A packet capture at the router reveals communication between the workstation and the Exchange server or proxy (though this is SSL-encrypted, so I don't know what the computers are saying).
    - I have applied a fix (added a DWORD value of 0 for DefConnectOpts under HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Outlook\RPC) that was recommended to make RPC function when the workstation does not have a default gateway set. The workstation is configured for DHCP. This fix did nothing, and it may be worth noting the RPC subkey was not present until I added it.
    - The RPC service is running on the workstation.
    - The program is not running under any compatibility mode. Side note: Outlook 2007 installs with compatibility mode for XP enabled by default in Windows 7, and Outlook 2007 will not even try to connect to Exchange if this compatibility mode is checked. In Windows XP, I tried checking compatibility mode for Windows 2000, and was unable to connect to Exchange as well.

    Here is the specific configuration I've used in a blank Outlook profile:
    Microsoft Exchange Server: ##MASKED##-MBX-C18.mex07a.mlsrvr.com
    Username: (full email address: [email protected])
    Password: ##MASKED##
    Outlook Anywhere: Connect to Microsoft Exchange using HTTP
    Exchange Proxy Settings:
    - Proxy Server: mex07a.emailsrvr.com
    - Check "Connect using SSL only"
    - Under "Only connect to proxy servers...", enter: msstd:mex07a.emailsrvr.com
    - Check "On fast networks, connect using HTTP first, then connect using TCP/IP"
    - Check "On slow networks, connect using HTTP first, then connect using TCP/IP"
    - Proxy authentication settings: Basic Authentication

    Notes: mex07a.mlsrvr.com and mex07a.emailsrvr.com may look incorrect at first glance, but this is not a typo - these instructions were handed down from Rackspace and are confirmed to be working, just not on this workstation. I have tried to use the RpcPing utility but must have been using it wrong; I got as far as "Bad Interface Descriptor". It would seem to me getting Outlook and Exchange to work together would be a breeze, especially since everything is done over port 80 with web services.
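    For reference, the registry fix described in the list above can also be applied from a command prompt; this is a sketch of the equivalent reg.exe command for the same key and value (back up the registry before changing anything):

        reg add "HKCU\Software\Microsoft\Office\12.0\Outlook\RPC" /v DefConnectOpts /t REG_DWORD /d 0 /f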
Unfortunately, the user is stuck with WebMail access only, because Outlook won't accept the Exchange credentials. Do you have any ideas of other things I could try to debug this issue further? Any and all help is greatly appreciated. Thank you! -Chris

    Read the article

  • What NAS setup for two-way syncing over the internet?

    - by Jamse
    I have family living a few hours away and have a lot of files that I would like to share - especially lots of folders of digital photos, but also documents etc. - partially so they can see them, partially so I can have access when I visit them, and partially for backup/redundancy purposes. My current hard drives on my main machine are getting pretty full anyway, and I have a MythTV box where my music is currently stored, so I was thinking of getting a NAS anyway. And at the other end my family have a few computers, so they would probably benefit from a NAS too.

    My general idea (though I'm willing to shift on this if there are any bright ideas about other ways of achieving my objectives) is to get a matching pair of NASs and have them sync over the internet. (To cut down on bandwidth use, I would get them in sync locally to start with.) Having read around as best I can, it seems that syncing over the internet is generally only a feature on quite high-end units. However, I have seen that QNAP seem to offer this on their TS-110 and TS-210 units, which might work (they call it "remote replication"). They seem pretty reasonably priced for what they are, but of course with buying two of them and then adding the drives (say 1TB or 2TB each) I'd be looking at about £400 total.

    So, I'm looking for recommendations really. I don't want to spend more than the QNAPs would cost me, but any other ideas would be most appreciated. I am comfortable with technology and tinkering around, but I don't have as much time for that as I would like, so I guess I would favour solutions that require less tinkering rather than more (even though that's less fun!). Any thoughts would be welcome, as would any comments from people who have used the QNAP boxes for this. Thanks in advance.

    Some specifications:
    - Two-way syncing. Changes made at either end should be synced to the other. There shouldn't be one unit that is effectively a read-only mirror of the other.
    - Not real time. The syncing doesn't need to be real time - if it updated, say, daily overnight, that would be fine.
    - Set and forget. I would prefer minimal user interaction once set up - it would be great if syncs were scheduled and automatic.
    - OS independence. I am running Windows XP plus an Ubuntu-based MythTV box. At the other end there are Windows 7 and Windows XP machines, plus a networked TV set-top box which I think can play files off the network.
    - Machine independence. I would favour a system that is self-contained, i.e. not reliant on any particular PC being switched on. If the system had enough else going for it I could perhaps work around it at this end, where I only have one PC that's used as such, but it would be harder at the other end, where there are at least two PCs that might be accessing the files.
    - Notifications. I guess things like getting an email notification if the syncing fell over for any reason would be useful, though it's not a deal breaker.

    Update: I've been digging some more and it looks like QNAP's Remote Replication function is actually just rsync, so it is only really suitable for one-way syncing. I've posted on their forum to double-check, but I think that's the case. In which case, the focus of my question is now either: do any reasonably priced NASs support bidirectional syncing over the internet, or has anyone had any luck installing a suitable sync tool onto NASs for this purpose? (Also, I've updated the question to clarify that I'm after two-way syncing.)

    Read the article

  • Difference between FQL query and Graph API object access

    - by jwynveen
    What's the difference between accessing user data with the Facebook Graph API (http://graph.facebook.com/btaylor) and using the Graph API to make an FQL query for the same user (https://api.facebook.com/method/fql.query?query=QUERY)? Also, does anyone know which of them the Facebook Developer Toolkit (for ASP.NET) uses?

    The reason I ask is that I'm trying to access the logged-in user's birthday after they begin a Facebook Connect session on my site, but when I use the toolkit it doesn't return it. However, if I make a manual call to the Graph API for that user object, it does return it. It's possible I might have something wrong with my call from the toolkit. I think I may need to include the session key, but I'm not sure how to get it. Here's the code I'm using:

    _connectSession = new ConnectSession(APPLICATION_KEY, SECRET_KEY);
    try
    {
        if (!_connectSession.IsConnected())
        {
            // Not authenticated, proceed as usual.
            statusResponse = "Please sign-in with Facebook.";
        }
        else
        {
            // Authenticated, create API instance
            _facebookAPI = new Api(_connectSession);
            // Load user
            user user = _facebookAPI.Users.GetInfo();
            statusResponse = user.ToString();
            ViewData["fb_user"] = user;
        }
    }
    catch (Exception ex)
    {
        // An error happened, so disconnect session
        _connectSession.Logout();
        statusResponse = "Please sign-in with Facebook.";
    }

    Read the article

  • Reattaching an object graph to an EntityContext: "cannot track multiple objects with the same key"

    - by dkr88
    Can EF really be this bad? Maybe... Let's say I have a fully loaded, disconnected object graph that looks like this:

    myReport = {Report}
        {ReportEdit {User: "JohnDoe"}}
        {ReportEdit {User: "JohnDoe"}}

    Basically, a report with two edits that were done by the same user. And then I do this:

    EntityContext.Attach(myReport);

    InvalidOperationException: An object with the same key already exists in the ObjectStateManager. The ObjectStateManager cannot track multiple objects with the same key.

    Why? Because EF is trying to attach the {User: "JohnDoe"} entity TWICE. This will work:

    myReport = {Report}
        {ReportEdit {User: "JohnDoe"}}

    EntityContext.Attach(myReport);

    No problems here, because the {User: "JohnDoe"} entity only appears in the object graph once. What's more, since you can't control how EF attaches an entity, there is no way to stop it from attaching the entire object graph. So really, if you want to reattach a complex entity that contains more than one reference to the same entity... well, good luck. At least that's how it looks to me. Any comments?

    UPDATE: Added sample code:

    // Load the report
    Report theReport;
    using (var context1 = new TestEntities())
    {
        context1.Reports.MergeOption = MergeOption.NoTracking;
        theReport = (from r in context1.Reports.Include("ReportEdits.User")
                     where r.Id == reportId
                     select r).First();
    }

    // theReport looks like this:
    // {Report[Id=1]}
    //     {ReportEdit[Id=1] {User[Id=1,Name="John Doe"]}
    //     {ReportEdit[Id=2] {User[Id=1,Name="John Doe"]}

    // Try to re-attach the report object graph
    using (var context2 = new TestEntities())
    {
        context2.Attach(theReport); // InvalidOperationException
    }
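    One workaround that follows from the diagnosis above - offered as a hedged sketch, not a confirmed EF recipe - is to collapse the duplicate User instances into a single object reference before calling Attach, so the ObjectStateManager only ever sees one object per key. This fragment would replace the body of the second using block (it assumes User has an integer Id key and that Report exposes its ReportEdits; adapt to the actual model):

        // Sketch: deduplicate User references in the detached graph before Attach.
        var canonicalUsers = new Dictionary<int, User>();
        foreach (ReportEdit edit in theReport.ReportEdits)
        {
            User seen;
            if (canonicalUsers.TryGetValue(edit.User.Id, out seen))
            {
                edit.User = seen; // reuse the instance already in the graph
            }
            else
            {
                canonicalUsers[edit.User.Id] = edit.User;
            }
        }
        context2.Attach(theReport); // now each key appears only once in the graph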

    Read the article

  • Twitter OAuth home timeline display with PHP

    - by Srinivas Tamada
    $hometime = $Twitter->get_statusesHome_timeline();

    Fatal error: Uncaught exception 'Exception' with message 'SimpleXMLElement::__construct() expects parameter 1 to be string'

    <?php
    include 'EpiCurl.php';
    include 'EpiOAuth.php';
    include 'EpiTwitter.php';
    include 'key.php';

    $Twitter = new EpiTwitter($consumerKey, $consumerSecret);
    $oauthToken = 'xxxxxxxxxxxxxxxxxxxxxxx';
    $oauthSecret = 'xxxxxxxxxxxxxxxxxxxxxxxxx';

    // user switched pages and came back or got here directly, still logged in
    $Twitter->setToken($oauthToken, $oauthSecret);
    $user = $Twitter->get_accountVerify_credentials();
    echo "<img src=\"{$user->profile_image_url}\">";
    echo "{$user->name}";

    $hometime = $Twitter->get_statusesHome_timeline();
    $twitter_status = new SimpleXMLElement($hometime);
    foreach ($twitter_status->status as $status) {
        echo '<div class="twitter_status">';
        foreach ($status->user as $user) {
            echo '<img src="' . $user->profile_image_url . '" class="twitter_image">';
            echo '<a href="http://www.twitter.com/' . $user->name . '">' . $user->name . '</a>: ';
        }
        echo $status->text;
        echo '<br/>';
        echo '<div class="twitter_posted_at"><strong>Posted at:</strong> ' . $status->created_at . '</div>';
        echo '</div>';
    }
    ?>

    Read the article

  • Getting web page after calling DownloadStringAsync()?

    - by OverTheRainbow
    Hello. I don't know enough about VB.NET yet to use the richer HttpWebRequest class, so I figured I'd use the simpler WebClient class to download web pages asynchronously (to avoid freezing the UI). However, how can the asynchronous event handler actually return the web page to the calling routine?

    Imports System.Net

    Public Class Form1
        Private Shared Sub DownloadStringCallback2(ByVal sender As Object, ByVal e As DownloadStringCompletedEventArgs)
            If e.Cancelled = False AndAlso e.Error Is Nothing Then
                Dim textString As String = CStr(e.Result)
                'HERE: How to return textString to the calling routine?
            End If
        End Sub

        Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
            Dim client As WebClient = New WebClient()
            AddHandler client.DownloadStringCompleted, AddressOf DownloadStringCallback2
            Dim uri As Uri = New Uri("http://www.google.com")
            client.DownloadStringAsync(uri)
            'HERE: how to get the web page back from the callback function?
        End Sub
    End Class

    Thank you.

    Edit: I added a global, shared variable and a While/DoEvents/End While loop, but there has to be a cleaner way to do this :-/

    Public Class Form1
        Shared page As String

        Public Shared Sub AlertStringDownloaded(ByVal sender As Object, ByVal e As DownloadStringCompletedEventArgs)
            ' If the string request went as planned and wasn't cancelled:
            If e.Cancelled = False AndAlso e.Error Is Nothing Then
                page = CStr(e.Result)
            End If
        End Sub

        Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
            Dim wc As New WebClient
            AddHandler wc.DownloadStringCompleted, AddressOf AlertStringDownloaded
            page = Nothing
            wc.DownloadStringAsync(New Uri("http://www.google.com"))

            'Better way to wait until page has been filled?
            While page Is Nothing
                Application.DoEvents()
            End While

            RichTextBox1.Text = page
        End Sub
    End Class
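    A cleaner pattern - shown here as a minimal sketch rather than the only correct answer - is to stop trying to "return" the page at all and instead consume it inside the completion handler. WebClient raises DownloadStringCompleted on the thread that captured the SynchronizationContext (the UI thread, when DownloadStringAsync is called from a button click in WinForms), so the handler can touch controls directly once it is made an instance method instead of Shared:

        ' Sketch: handle the result where it arrives, instead of polling with DoEvents.
        Imports System.Net

        Public Class Form1
            Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
                Dim wc As New WebClient()
                AddHandler wc.DownloadStringCompleted, AddressOf OnPageDownloaded
                wc.DownloadStringAsync(New Uri("http://www.google.com"))
                ' Return immediately; the UI stays responsive while the download runs.
            End Sub

            ' Instance method (not Shared), so it can reach the form's controls.
            Private Sub OnPageDownloaded(ByVal sender As Object, ByVal e As DownloadStringCompletedEventArgs)
                If Not e.Cancelled AndAlso e.Error Is Nothing Then
                    ' Raised on the UI thread, so updating the control here is safe.
                    RichTextBox1.Text = CStr(e.Result)
                End If
            End Sub
        End Class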

    Read the article

  • Custom Event - invokation list implementation considerations

    - by M.A. Hanin
    I'm looking for some pointers on implementing custom events in VB.NET (Visual Studio 2008, .NET 3.5). I know that "regular" (non-custom) events are actually delegates, so I was thinking of using delegates when implementing a custom event. On the other hand, Andrew Troelsen's "Pro VB 2008 and the .NET 3.5 Platform" book uses collection types in all of its custom-event examples, and Microsoft's sample code matches that line of thought.

    So my question is: what considerations should I have when choosing one design over the other? What are the pros and cons of each design? Which of these resembles the inner implementation of "regular" events?

    Below is a sample demonstrating the two designs.

    Public Class SomeClass
        Private _SomeEventListeners As EventHandler

        Public Custom Event SomeEvent As EventHandler
            AddHandler(ByVal value As EventHandler)
                _SomeEventListeners = [Delegate].Combine(_SomeEventListeners, value)
            End AddHandler
            RemoveHandler(ByVal value As EventHandler)
                _SomeEventListeners = [Delegate].Remove(_SomeEventListeners, value)
            End RemoveHandler
            RaiseEvent(ByVal sender As Object, ByVal e As System.EventArgs)
                _SomeEventListeners.Invoke(sender, e)
            End RaiseEvent
        End Event

        Private _OtherEventListeners As New List(Of EventHandler)

        Public Custom Event OtherEvent As EventHandler
            AddHandler(ByVal value As EventHandler)
                _OtherEventListeners.Add(value)
            End AddHandler
            RemoveHandler(ByVal value As EventHandler)
                _OtherEventListeners.Remove(value)
            End RemoveHandler
            RaiseEvent(ByVal sender As Object, ByVal e As System.EventArgs)
                For Each handler In _OtherEventListeners
                    handler(sender, e)
                Next
            End RaiseEvent
        End Event
    End Class

    Read the article

  • Unit Testing using InternalsVisibleToAttribute requires compiling with /out:filename.ext?

    - by Will Marcouiller
    In my most recent question, Unit Testing Best Practice? / C# InternalsVisibleTo() attribute for VBNET 2.0 while testing?, I was asking about InternalsVisibleToAttribute. I have read the documentation on how to use it, and everything is fine and understood. However, I can't instantiate my class Groupe from my testing project. I want to be able to instantiate my internal class in my wrapper assembly, from my testing assembly. Any help is appreciated!

    EDIT #1: Here's the compile-time error I get when I do try to instantiate my type:

        Erreur 2 'Carra.Exemples.Blocs.ActiveDirectory.Groupe' n'est pas accessible dans ce contexte, car il est 'Private'. C:\Open\Projects\Exemples\Src\Carra.Exemples.Blocs.ActiveDirectory\Carra.Exemples.Blocs.ActiveDirectory.Tests\GroupeTests.vb 9 18 Carra.Exemples.Blocs.ActiveDirectory.Tests

    (This says that my type is not accessible in this context, because it is private.) But it's Friend (internal)!

    EDIT #2: Here's a piece of code, as suggested, for the Groupe class implementing the public interface IGroupe:

        #Region "Importations"
        Imports System.DirectoryServices
        Imports System.Runtime.CompilerServices
        #End Region

        <Assembly: InternalsVisibleTo("Carra.Exemples.Blocs.ActiveDirectory.Tests")>

        Friend Class Groupe
            Implements IGroupe

        #Region "Membres privés"
            Private _classe As String = "group"
            Private _domaine As String
            Private _membres As CustomSet(Of IUtilisateur)
            Private _groupeNatif As DirectoryEntry
        #End Region

        #Region "Constructeurs"
            Friend Sub New()
                _membres = New CustomSet(Of IUtilisateur)()
                _groupeNatif = New DirectoryEntry()
            End Sub

            Friend Sub New(ByVal domaine As String)
                If (String.IsNullOrEmpty(domaine)) Then Throw New ArgumentNullException()
                _domaine = domaine
                _membres = New CustomSet(Of IUtilisateur)()
                _groupeNatif = New DirectoryEntry(domaine)
            End Sub

            Friend Sub New(ByVal groupeNatif As DirectoryEntry)
                _groupeNatif = groupeNatif
                _domaine = _groupeNatif.Path
                _membres = New CustomSet(Of IUtilisateur)()
            End Sub
        #End Region

    And the code trying to use it:

        #Region "Importations"
        Imports NUnit.Framework
        Imports Carra.Exemples.Blocs.ActiveDirectory.Tests
        #End Region

        <TestFixture()> _
        Public Class GroupeTests
            <Test()> _
            Public Sub CreerDefaut()
                Dim g As Groupe = New Groupe()
                Assert.IsNotNull(g)
                Assert.IsInstanceOf(Groupe, g)
            End Sub
        End Class

    EDIT #3: Damn! I have just noticed that I wasn't importing the assembly in my importation region. Nope, that didn't solve anything =( Thanks!
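
    For what it's worth, the usual checklist for that exact error, offered as an assumption rather than a confirmed diagnosis: the attribute must live in the assembly under test (typically its AssemblyInfo.vb), it must name the test assembly exactly, and if either assembly is strong-named it must carry the friend assembly's full public key.

        ' AssemblyInfo.vb of the assembly under test (Carra.Exemples.Blocs.ActiveDirectory)
        Imports System.Runtime.CompilerServices

        <Assembly: InternalsVisibleTo("Carra.Exemples.Blocs.ActiveDirectory.Tests")>

        ' If both assemblies are signed, the PublicKey form is required instead
        ' (the key below is a placeholder, not a real key):
        '<Assembly: InternalsVisibleTo("Carra.Exemples.Blocs.ActiveDirectory.Tests, PublicKey=0024000004...")>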

    Read the article

  • Using Lambda Expression trees with IEnumerable

    - by Loathian
    I've been trying to learn more about using Lambda expression trees, so I created a simple example. Here is the code; it can be pasted into LINQPad as a C# program.

        void Main()
        {
            IEnumerable<User> list = GetUsers().Where(NameContains("a"));
            list.Dump("Users");
        }

        // Methods
        public IEnumerable<User> GetUsers()
        {
            yield return new User { Name = "andrew" };
            yield return new User { Name = "rob" };
            yield return new User { Name = "chris" };
            yield return new User { Name = "ryan" };
        }

        public Expression<Func<User, bool>> NameContains(string namePart)
        {
            return u => u.Name.Contains(namePart);
        }

        // Classes
        public class User
        {
            public string Name { get; set; }
        }

    This results in the following error:

        The type arguments for method 'System.Linq.Enumerable.Where(System.Collections.Generic.IEnumerable, System.Func)' cannot be inferred from the usage. Try specifying the type arguments explicitly.

    However, if I just substitute the first line in Main with this:

        IEnumerable<User> list = GetUsers().Where(u => u.Name.Contains("a"));

    it works fine. Can anyone tell me what I'm doing wrong, please?
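
    A sketch of two standard fixes, reusing the question's own names: Enumerable.Where over an IEnumerable<User> wants a Func<User, bool>, while NameContains returns an Expression<Func<User, bool>>, so type inference fails. Either compile the expression into a delegate, or move to IQueryable, whose Where overload accepts expression trees.

        // 1) Compile the expression tree into a delegate for LINQ-to-Objects:
        IEnumerable<User> list = GetUsers().Where(NameContains("a").Compile());

        // 2) Or stay in expression-tree land via IQueryable:
        IQueryable<User> query = GetUsers().AsQueryable().Where(NameContains("a"));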

    Read the article

  • TRibbon's large button's image is not centered...any ideas? easy to demonstrate at design-time.

    - by X-Ray
    I'm using Delphi 2009 (updates 1, 2, 3, 4), and I'm seeing something quite peculiar: the image on the button is not centered when I have a large button with a large glyph. Rather than being centered, the left part of the glyph starts at the center of the button. A clue: when I go into the action editor, select the action, and use the ImageIndex combobox in the Object Inspector, the list is empty (normally I'd see the available images in the combobox). It seems as though there's an image width property I've failed to set, or an image list that isn't correctly configured. I expected the glyph on a large button to be 32x32. Try the following:

    1. Paste these components into an empty form.
    2. Add a 32x32 image to the image list.
    3. Set the Action1 ImageIndex to 0.

    You'll immediately see what I mean. Can anyone tell me why it looks that way? I find it interesting that the ribbon demo app doesn't show this problem; I even tried the same image. Thank you!

        object ActionManager1: TActionManager
          ActionBars = <
            item
              Items = <
                item
                  Action = Action1
                  Caption = '&Action1'
                  ImageIndex = 0
                  CommandProperties.ButtonSize = bsLarge
                end>
              ActionBar = RibbonGroup1
            end>
          LargeDisabledImages = img3232
          LargeImages = img3232
          Left = 376
          Top = 184
          StyleName = 'Ribbon - Luna'
          object Action1: TAction
            Caption = 'Action1'
            ImageIndex = 0
          end
        end
        object Ribbon1: TRibbon
          Left = 0
          Top = 0
          Width = 693
          Height = 147
          ActionManager = ActionManager1
          Caption = 'Ribbon1'
          Tabs = <
            item
              Caption = 'RibbonPage1'
              Page = RibbonPage1
            end>
          ExplicitLeft = 232
          ExplicitTop = 80
          ExplicitWidth = 0
          DesignSize = (
            693
            147)
          StyleName = 'Ribbon - Luna'
          object RibbonPage1: TRibbonPage
            Left = 0
            Top = 54
            Width = 692
            Height = 93
            Caption = 'RibbonPage1'
            Index = 0
            object RibbonGroup1: TRibbonGroup
              Left = 4
              Top = 3
              Width = 54
              Height = 86
              ActionManager = ActionManager1
              Caption = 'RibbonGroup1'
              GroupIndex = 0
            end
          end
        end
        object img3232: TImageList
          Height = 32
          Width = 32
          Left = 376
          Top = 256
        end

    Read the article

  • "no block given" errors with cache_money

    - by emh
    I've inherited a site that, in production, is generating dozens of "no block given" exceptions every 5 minutes. The top of the stack trace is:

        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:42:in `add'
        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:33:in `get'
        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:22:in `call'
        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:22:in `fetch'
        vendor/gems/nkallen-cache-money-0.2.5/lib/cash/accessor.rb:31:in `get'

    So it appears that the problem is in the cache_money plugin. Has anyone experienced something similar? I've pasted the relevant code below (line numbers kept to match the trace); can anyone more familiar with blocks discern any obvious problems?

        11 def fetch(keys, options = {}, &block)
        12   case keys
        13   when Array
        14     keys = keys.collect { |key| cache_key(key) }
        15     hits = repository.get_multi(keys)
        16     if (missed_keys = keys - hits.keys).any?
        17       missed_values = block.call(missed_keys)
        18       hits.merge!(missed_keys.zip(Array(missed_values)).to_hash)
        19     end
        20     hits
        21   else
        22     repository.get(cache_key(keys), options[:raw]) || (block ? block.call : nil)
        23   end
        24 end
        25
        26 def get(keys, options = {}, &block)
        27   case keys
        28   when Array
        29     fetch(keys, options, &block)
        30   else
        31     fetch(keys, options) do
        32       if block_given?
        33         add(keys, result = yield(keys), options)
        34         result
        35       end
        36     end
        37   end
        38 end
        39
        40 def add(key, value, options = {})
        41   if repository.add(cache_key(key), value, options[:ttl] || 0, options[:raw]) == "NOT_STORED\r\n"
        42     yield
        43   end
        44 end
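
    A hedged reading of that trace: line 33 calls add without passing it a block, yet add yields unconditionally at line 42 whenever memcached answers NOT_STORED (i.e. the key already exists, which is a normal race between writers). A minimal defensive patch, sketched against 0.2.5 and not an official fix, would guard the yield:

        # cash/accessor.rb, sketch of a guard (hypothetical patch):
        def add(key, value, options = {})
          if repository.add(cache_key(key), value, options[:ttl] || 0, options[:raw]) == "NOT_STORED\r\n"
            yield if block_given?  # only yield when a block was actually supplied
          end
        end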

    Read the article

  • Problem between Glassfish and Spring Security Basic Authentication

    - by Raspayu
    Hi! I am enabling simple HTTP Basic Authentication with Spring Security in my project. My environment is a Glassfish server (bundled with NetBeans), and almost everything works perfectly: I have set it up to ask for authentication only on the POST method, with users hardcoded via "user-service", and it works for user names with no special characters. The problem comes when I set up a user name containing "@" or "." Here is the Spring Security-related part of my servlet.xml:

        <security:http>
            <security:intercept-url method="POST" pattern="/**" access="ROLE_USER" />
            <security:http-basic/>
        </security:http>

        <security:authentication-manager alias="authenticationManager">
            <security:authentication-provider user-service-ref="uservice"/>
        </security:authentication-manager>

        <security:user-service id="uservice">
            <security:user name="[email protected]" password="pswd1" authorities="ROLE_USER" />
            <security:user name="[email protected]" password="pswd2" authorities="ROLE_USER" />
            <security:user name="pepe" password="pepito" authorities="ROLE_USER" />
        </security:user-service>

    I have also looked at what the browser sends to the listening port, and it sends the "username:password" pair in Base64 correctly, so I think the problem is on the server side (Glassfish v3). Does anyone have any idea? Thanks in advance! Raspayu

    Read the article

  • How to manage a One-To-One and a One-To-Many of same type as unidirectional mapping?

    - by user1652438
    I'm trying to implement a model for private messages between two or more users. That means I've got two entities: User and PrivateMessage. The User model shouldn't be edited, so I'm trying to set up a unidirectional relationship:

        @Entity(name = "User")
        @Table(name = "user")
        public class User implements Serializable {
            @Id
            String username;
            String password;
            // something like this ...
        }

    The PrivateMessage model addresses multiple receivers and has exactly one sender. So I need something like this:

        @Entity(name = "PrivateMessage")
        @Table(name = "privateMessage")
        @XmlRootElement
        @XmlType(propOrder = {"id", "sender", "receivers", "title", "text", "date", "read"})
        public class PrivateMessage implements Serializable {
            private static final long serialVersionUID = -9126766942868177246L;

            @Id
            @GeneratedValue
            private Long id;

            @NotNull
            private String title;

            @NotNull
            private String text;

            @NotNull
            @Temporal(TemporalType.TIMESTAMP)
            private Date date;

            @NotNull
            private boolean read;

            @NotNull
            @ElementCollection(fetch = FetchType.EAGER, targetClass = User.class)
            private Set<User> receivers;

            @NotNull
            @OneToOne
            private User sender;

            // and so on
        }

    The corresponding "privateMessage" table won't be generated, and only the relationship between the PrivateMessage and its many receivers is satisfied. I'm confused about this. Every time I try to set a "mappedBy" attribute, my IDE marks it as an error; it seems to be a problem that the User entity isn't aware of the private messages that map it. What am I doing wrong here? I've solved some situations similar to this one, but none of those solutions works here. Thanks in advance!
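
    A sketch of the usual unidirectional mapping, hedged in that the join-table and column names below are invented: @ElementCollection targets basic and embeddable types, not entities, which matches the symptom of the mapping never materializing. An entity-valued, unidirectional to-many association is normally declared with @ManyToMany (or @OneToMany plus @JoinTable), and the single sender with @ManyToOne, since @OneToOne would forbid a user from ever sending a second message. No mappedBy is needed, because User stays unaware of PrivateMessage.

        import java.io.Serializable;
        import java.util.Set;
        import javax.persistence.*;

        @Entity(name = "PrivateMessage")
        public class PrivateMessage implements Serializable {
            @Id @GeneratedValue
            private Long id;

            // many messages -> many receiving users, owned entirely by this side
            @ManyToMany
            @JoinTable(name = "privateMessage_receivers",
                       joinColumns = @JoinColumn(name = "message_id"),
                       inverseJoinColumns = @JoinColumn(name = "username"))
            private Set<User> receivers;

            // many messages -> one sender
            @ManyToOne
            @JoinColumn(name = "sender_username")
            private User sender;
        }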

    Read the article

  • ASP Classic, SQL 2008, XML Output, and DSN vs DSN-Less Produces Chinese Letters

    - by RoLYroLLs
    I've been having a problem for the past month and can't seem to figure out what is wrong. Here's the setup and a little background.

    Background: I have a web host who was running my website on Windows Server 2003 and SQL Server 2000. One of my web pages returned a result set from a stored procedure on the SQL Server as XML. Below is the code.

    Stored procedure:

        select top 10
            1 as tag
          , null as parent
          , column1 as [item!1!column1!element]
          , column2 as [item!1!column2!element]
        from table1
        for XML EXPLICIT

    ASP page (index.asp):

        Call OpenConn
        Set cmd = Server.CreateObject("ADODB.Command")
        With cmd
            .ActiveConnection = dbc
            .CommandText = "name of proc"
            .CommandType = adCmdStoredProc
            .Parameters.Append .CreateParameter("@RetVal", adInteger, adParamReturnValue, 4)
            .Parameters.Append .CreateParameter("@Level", adInteger, adParamInput, 4, Level)
        End With

        Set rsItems = Server.CreateObject("ADODB.Recordset")
        With rsItems
            .CursorLocation = adUseClient
            .CursorType = adOpenStatic
            .LockType = adLockBatchOptimistic
            Set .Source = cmd
            .Open
            Set .ActiveConnection = Nothing
        End With

        If NOT rsItems.BOF AND NOT rsItems.EOF Then
            OutputXMLQueryResults rsItems, "items"
        End If

        Set rsItems = Nothing
        Set cmd = Nothing
        Call CloseConn

        Sub OpenConn()
            strConn = "Provider=SQLOLEDB;Data Source=[hidden];User Id=[hidden];Password=[hidden];Initial Catalog=[hidden];"
            Set dbc = Server.CreateObject("ADODB.Connection")
            dbc.open strConn
        End Sub

        Sub CloseConn()
            If IsObject(dbc) Then
                If dbc.State = adStateOpen Then
                    dbc.Close
                End If
                Set dbc = Nothing
            End If
        End Sub

        Sub OutputXMLQueryResults(RS, RootElementName)
            Response.Clear
            Response.ContentType = "text/xml"
            Response.Codepage = 65001
            Response.Charset = "utf-8"
            ' The string literals below were stripped when this post was formatted;
            ' from context they wrote the XML declaration and the root element:
            Response.Write "<?xml version=""1.0"" encoding=""utf-8""?>"
            Response.Write "<" & RootElementName & ">"
            While Not RS.EOF
                Response.Write RS(0).Value
                RS.MoveNext
            WEnd
            Response.Write "</" & RootElementName & ">"
            Response.End
        End Sub

    Present: all was working great until my host upgraded to Windows Server 2008 and SQL Server 2008. All of a sudden I was getting Chinese-looking characters in the browser and in the view-source output (the original post's screenshots are not reproduced here). However, I found that if I use a DSN connection:

        strConn = "DSN=[my DSN Name];User Id=[hidden];Password=[hidden];Initial Catalog=[hidden];"

    it works perfectly fine! My current host is not going to support DSNs much longer, but that's out of scope for this issue. Someone told me to use an ADO Stream object instead of a Recordset object, but I'm unsure how to implement that.

    Question: has anyone run into this and found a way to fix it? What about that ADO Stream object: can someone help me with a sample that would fit my code?
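
    One commonly suggested workaround, offered as a hedged sketch rather than a confirmed root cause: Chinese-looking characters are what UTF-16 bytes look like when misread as single-byte text, and FOR XML results can come back from the provider as an XML stream rather than a plain column. Casting the XML to NVARCHAR(MAX) inside the procedure makes ADO see ordinary text again (the column alias XmlText is invented):

        SELECT CAST(
            (SELECT TOP 10
                 1 AS tag
               , NULL AS parent
               , column1 AS [item!1!column1!element]
               , column2 AS [item!1!column2!element]
             FROM table1
             FOR XML EXPLICIT)
            AS NVARCHAR(MAX)) AS XmlText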

    Read the article

  • Adobe AIR non-Administrator application installation/upgrade on Windows

    - by bzlm
    Is there any way to allow non-Administrator users to install, upgrade, or uninstall an Adobe AIR application on Windows? I've made an Adobe AIR application and packaged it as a .air package using the standard AIR mechanism for creating deployment packages. If a normal or Power user tries to install this AIR application, the Application Event Log shows an error saying administrative rights are required. And even if the user elevates during installation, administrative rights are still required for an upgrade using the automated AIR upgrade system (since an upgrade is essentially, behind the scenes, an uninstallation of a .msi package followed by an installation of another .msi package). Is there any way around this? What I've tried so far:

    • Using the Group Policy editor, setting Windows Installer to elevate during installations. Doesn't work, since AIR attempts a "for all users" installation.
    • Specifying My Documents as the installation directory. Doesn't work, since AIR attempts a "for all users" installation.
    • Giving the user Modify access to the Program Files folder where the application would usually reside. Doesn't work, since this isn't a file permissions issue.
    • Making the user a Power User. Doesn't work, since AIR attempts a "for all users" installation.

    I'm guessing that both installing and upgrading would work fine for a user if the AIR installer attempted an "only for me" installation instead of a "for all users" installation, the user was a Power User, and possibly the application was installed to My Documents. I'm also guessing that this problem doesn't exist on OS X and Linux, since they have more intuitive concepts for per-user application installations.

    Read the article

  • SQL Server 2008 alternative for SQL-DMO

    - by alexdelpiero
    Hi! I previously was using SQL-DMO to automatically generate scripts from the database. Now I upgraded to SQL Server 2008 and I don't want to use this feature anymore, since Microsoft will be dropping it. Is there any other alternative I can use to connect to a server and generate scripts automatically from a database? Any answer is welcome. Thanks in advance. This is the procedure I was previously using:

        CREATE PROC GenerateSP (
            @server   varchar(30)  = null,
            @uname    varchar(30)  = null,
            @pwd      varchar(30)  = null,
            @dbname   varchar(30)  = null,
            @filename varchar(200) = 'c:\script.sql'
        )
        AS
        DECLARE @object int
        DECLARE @hr int
        DECLARE @return varchar(200)
        DECLARE @exec_str varchar(2000)
        DECLARE @spname sysname

        SET NOCOUNT ON

        -- Sets the server to the local server
        IF @server is NULL SELECT @server = @@servername
        -- Sets the database to the current database
        IF @dbname is NULL SELECT @dbname = db_name()
        -- Sets the username to the current user name
        IF @uname is NULL SELECT @uname = SYSTEM_USER

        -- Create an object that points to the SQL Server
        EXEC @hr = sp_OACreate 'SQLDMO.SQLServer', @object OUT
        IF @hr <> 0
        BEGIN
            PRINT 'error create SQLOLE.SQLServer'
            RETURN
        END

        -- Connect to the SQL Server
        IF @pwd is NULL
        BEGIN
            EXEC @hr = sp_OAMethod @object, 'Connect', NULL, @server, @uname
            IF @hr <> 0
            BEGIN
                PRINT 'error Connect'
                RETURN
            END
        END
        ELSE
        BEGIN
            EXEC @hr = sp_OAMethod @object, 'Connect', NULL, @server, @uname, @pwd
            IF @hr <> 0
            BEGIN
                PRINT 'error Connect'
                RETURN
            END
        END

        -- Verify the connection
        EXEC @hr = sp_OAMethod @object, 'VerifyConnection', @return OUT
        IF @hr <> 0
        BEGIN
            PRINT 'error VerifyConnection'
            RETURN
        END

        SET @exec_str = 'DECLARE script_cursor CURSOR FOR SELECT name FROM ' + @dbname + '..sysobjects WHERE type = ''P'' ORDER BY Name'
        EXEC (@exec_str)

        OPEN script_cursor
        FETCH NEXT FROM script_cursor INTO @spname
        WHILE (@@fetch_status <> -1)
        BEGIN
            SET @exec_str = 'Databases("'+ @dbname +'").StoredProcedures("'+RTRIM(UPPER(@spname))+'").Script(74077,"'+ @filename +'")'
            EXEC @hr = sp_OAMethod @object, @exec_str, @return OUT
            IF @hr <> 0
            BEGIN
                PRINT 'error Script'
                RETURN
            END
            FETCH NEXT FROM script_cursor INTO @spname
        END
        CLOSE script_cursor
        DEALLOCATE script_cursor

        -- Destroy the object
        EXEC @hr = sp_OADestroy @object
        IF @hr <> 0
        BEGIN
            PRINT 'error destroy object'
            RETURN
        END
        GO
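
    The documented successor to SQL-DMO is SMO (Server Management Objects). A minimal sketch, assuming the Microsoft.SqlServer.Smo and ConnectionInfo assemblies are referenced, and with placeholder server, database, and file names:

        using System.IO;
        using Microsoft.SqlServer.Management.Smo;

        class ScriptAllProcs
        {
            static void Main()
            {
                var server = new Server("localhost");          // or wrap a ServerConnection for SQL logins
                var db = server.Databases["MyDatabase"];

                using (var writer = new StreamWriter(@"c:\script.sql"))
                {
                    foreach (StoredProcedure sp in db.StoredProcedures)
                    {
                        if (sp.IsSystemObject) continue;       // skip the built-in procedures
                        foreach (string line in sp.Script())   // Script() returns a StringCollection
                            writer.WriteLine(line);
                        writer.WriteLine("GO");
                    }
                }
            }
        }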

    Read the article

  • Java split xml file

    - by CC
    Hi all, I'm working on a piece of code to split files. I want to split flat files (that part is working fine) and XML files. The idea is to split based on a requested number of output files: I have one file, and I want to split it into x files (x is a parameter). I compute each chunk size by dividing the file size by the number of output files. My solution was then to use a BufferedReader in a loop like:

        while ((n = reader.read(buffer, 0, buffer.length)) != -1) {

    The main problem is that I cannot just cut the XML file anywhere: I have to split it at the boundary of a block delimited by a start tag and an end tag:

        <start tag>
            bla bla xml stuff
        </end tag>

    So I cannot cut a block in the middle. If I'm halfway through a block when the new file reaches its maximum size, I have to keep reading until the end tag, and only then start the next file. The problem is that I hit all sorts of cases, and searching for the end tag is a bit difficult: sometimes a read ends in the middle of the end tag; sometimes it ends exactly at the end of the end tag with no characters after; and so on, all while looping to read the next block. Sometimes the end of one block concatenated with the start of the next one contains the end tag. I hope you get the idea. My question is: does anyone have an algorithm that does this more accurately and handles all the special cases? The idea is to split the file as quickly as possible. Thanks a lot.
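
    One robust approach, sketched below with StAX so the parser, rather than hand-rolled character scanning, finds the element boundaries. The record element name "item", the root name "items", the chunk size, and the file names are all placeholders for illustration:

        import java.io.FileInputStream;
        import java.io.FileWriter;
        import javax.xml.stream.*;
        import javax.xml.stream.events.XMLEvent;

        public class XmlSplitter {
            public static void main(String[] args) throws Exception {
                final String recordName = "item"; // the repeated top-level element
                final int recordsPerFile = 1000;  // split size, counted in records, not bytes

                XMLEventReader in = XMLInputFactory.newInstance()
                        .createXMLEventReader(new FileInputStream("big.xml"));
                XMLEventFactory ef = XMLEventFactory.newInstance();
                XMLOutputFactory outFactory = XMLOutputFactory.newInstance();

                XMLEventWriter out = null;
                int written = 0, part = 0, depth = 0;

                while (in.hasNext()) {
                    XMLEvent e = in.nextEvent();
                    if (e.isStartElement()) {
                        depth++;
                        // a new record begins: open the next part file if none is open
                        if (depth == 2 && out == null) {
                            out = outFactory.createXMLEventWriter(new FileWriter("part" + (part++) + ".xml"));
                            out.add(ef.createStartDocument());
                            out.add(ef.createStartElement("", "", "items"));
                        }
                    }
                    if (out != null && depth >= 2) out.add(e); // copy record content verbatim
                    if (e.isEndElement()) {
                        if (depth == 2 && e.asEndElement().getName().getLocalPart().equals(recordName)
                                && ++written % recordsPerFile == 0) {
                            out.add(ef.createEndElement("", "", "items")); // record ended: a safe cut point
                            out.add(ef.createEndDocument());
                            out.close();
                            out = null;
                        }
                        depth--;
                    }
                }
                if (out != null) { // close the final, possibly short, part
                    out.add(ef.createEndElement("", "", "items"));
                    out.add(ef.createEndDocument());
                    out.close();
                }
                in.close();
            }
        }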

    Read the article

  • Should this even be a has_many :through association?

    - by GoodGets
    A Post belongs_to a User, and a User has_many Posts. A Post also belongs_to a Topic, and a Topic has_many Posts.

        class User < ActiveRecord::Base
          has_many :posts
        end

        class Topic < ActiveRecord::Base
          has_many :posts
        end

        class Post < ActiveRecord::Base
          belongs_to :user
          belongs_to :topic
        end

    Well, that's pretty simple and very easy to set up, but when I display a Topic, I not only want all of the Posts for that Topic, but also the user_name and the user_photo of the User that made each Post. However, those attributes are stored in the User model and not tied to the Topic. So how would I go about setting that up? Maybe it can already be called, since the Post model has two foreign keys, one for the User and one for the Topic? Or maybe this is some sort of "one-way" has_many :through association: the Post would be the join model, and a Topic would has_many :users, :through => :posts. But the reverse of this is not true; a User does NOT has_many :topics. So would this even need to be a has_many :through association? I guess I'm just a little confused about what the controller would look like to call both the Post and the User of that Post for a given Topic.

    Edit: Seriously, thank you to all who weighed in. I chose tal's answer because I used his code for my controller; however, I could just as easily have chosen either j.'s or tim's instead. Thank you both as well. This was so simple to implement, and I think today marks the day that I'm beginning to fall in love with Rails.
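
    For the controller question, a sketch in Rails 2-era syntax with invented variable names: no has_many :through is required to display authors, because each Post already reaches its User through belongs_to; eager-loading just avoids one query per post.

        class TopicsController < ApplicationController
          def show
            @topic = Topic.find(params[:id])
            @posts = @topic.posts.all(:include => :user) # eager-load each post's author
          end
        end

        # In the view, each post reaches its author directly:
        # <% @posts.each do |post| %>
        #   <%= post.user.user_name %> <%= image_tag(post.user.user_photo) %>
        # <% end %>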

    Read the article

  • Using ReadOnlyCollection preventing me from setting up a bi-directional many-to-many relationship

    - by Kevin Pang
    I'm using NHibernate to persist a many-to-many relation between Users and Networks. I've set up both the User and Network classes as follows, exposing each one's collection as a ReadOnlyCollection to prevent direct access to the underlying list. I'm trying to make sure that the only way a User can be added to a Network is by using its "JoinNetwork" function. However, I can't seem to figure out how to add the User to the Network's list of users, since that collection is read-only.

        public class User
        {
            private ISet<Network> _Networks = new HashedSet<Network>();

            public ReadOnlyCollection<Network> Networks
            {
                get { return new List<Network>(_Networks).AsReadOnly(); }
            }

            public void JoinNetwork(Network network)
            {
                _Networks.Add(network);
                // How do I add the current user to the Network's list of users?
            }
        }

        public class Network
        {
            private ISet<User> _Users = new HashedSet<User>();

            public ReadOnlyCollection<User> Users
            {
                get { return new List<User>(_Users).AsReadOnly(); }
            }
        }
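
    One common sketch, assuming both classes live in the same assembly (or that InternalsVisibleTo is acceptable): give Network a non-public mutator and let JoinNetwork call it, so both ends of the association stay in sync while outside callers still see only read-only collections. The method name AddMember is invented here.

        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using Iesi.Collections.Generic; // ISet/HashedSet, as in the question

        public class Network
        {
            private ISet<User> _Users = new HashedSet<User>();

            public ReadOnlyCollection<User> Users
            {
                get { return new List<User>(_Users).AsReadOnly(); }
            }

            internal void AddMember(User user) // deliberately not public
            {
                _Users.Add(user);
            }
        }

        public class User
        {
            private ISet<Network> _Networks = new HashedSet<Network>();

            public ReadOnlyCollection<Network> Networks
            {
                get { return new List<Network>(_Networks).AsReadOnly(); }
            }

            public void JoinNetwork(Network network)
            {
                _Networks.Add(network);
                network.AddMember(this); // keep both sides of the association consistent
            }
        }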

    Read the article

  • Rijndael managed: plaintext length detection

    - by sheepsimulator
    I am spending some time learning how to use the RijndaelManaged library in .NET, and developed the following function to test encrypting text, with slight modifications from the MSDN library:

        Function encryptBytesToBytes_AES(ByVal plainText As Byte(), ByVal Key() As Byte, ByVal IV() As Byte) As Byte()
            ' Check arguments.
            If plainText Is Nothing OrElse plainText.Length <= 0 Then
                Throw New ArgumentNullException("plainText")
            End If
            If Key Is Nothing OrElse Key.Length <= 0 Then
                Throw New ArgumentNullException("Key")
            End If
            If IV Is Nothing OrElse IV.Length <= 0 Then
                Throw New ArgumentNullException("IV")
            End If

            ' Declare the RijndaelManaged object
            ' used to encrypt the data.
            Dim aesAlg As RijndaelManaged = Nothing

            ' Declare the stream used to encrypt to an in-memory
            ' array of bytes.
            Dim msEncrypt As MemoryStream = Nothing

            Try
                ' Create a RijndaelManaged object
                ' with the specified key and IV.
                aesAlg = New RijndaelManaged()
                aesAlg.BlockSize = 128
                aesAlg.KeySize = 128
                aesAlg.Mode = CipherMode.ECB
                aesAlg.Padding = PaddingMode.None
                aesAlg.Key = Key
                aesAlg.IV = IV

                ' Create an encryptor to perform the stream transform.
                Dim encryptor As ICryptoTransform = aesAlg.CreateEncryptor(aesAlg.Key, aesAlg.IV)

                ' Create the streams used for encryption.
                msEncrypt = New MemoryStream()
                Using csEncrypt As New CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write)
                    Using swEncrypt As New StreamWriter(csEncrypt)
                        ' Write all data to the stream.
                        swEncrypt.Write(plainText)
                    End Using
                End Using
            Finally
                ' Clear the RijndaelManaged object.
                If Not (aesAlg Is Nothing) Then
                    aesAlg.Clear()
                End If
            End Try

            ' Return the encrypted bytes from the memory stream.
            Return msEncrypt.ToArray()
        End Function

    Here's the actual code I am calling encryptBytesToBytes_AES() with:

        Private Sub btnEncrypt_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnEncrypt.Click
            Dim bZeroKey As Byte() = {&H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0}
            PrintBytesToRTF(encryptBytesToBytes_AES(bZeroKey, bZeroKey, bZeroKey))
        End Sub

    However, I get an exception thrown on swEncrypt.Write(plainText), stating that the "Length of the data to encrypt is invalid." But I know that the size of my key, IV, and plaintext is 16 bytes == 128 bits == aesAlg.BlockSize. Why is it throwing this exception? Is it because the StreamWriter is trying to make a String (ostensibly with some encoding) and it doesn't like &H0 as a value?
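
    A hedged sketch of the likely fix: StreamWriter has no Byte() overload, so the call binds to Write(Object) and writes the literal string "System.Byte[]" (13 characters), and 13 bytes is not a multiple of the 16-byte block size, which PaddingMode.None cannot absorb. Writing the raw bytes to the CryptoStream avoids the text layer entirely; this would replace the two nested Using blocks in the function above:

        msEncrypt = New MemoryStream()
        Using csEncrypt As New CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write)
            csEncrypt.Write(plainText, 0, plainText.Length) ' raw bytes, no StreamWriter
            csEncrypt.FlushFinalBlock()
        End Using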

    Read the article

< Previous Page | 369 370 371 372 373 374 375 376 377 378 379 380  | Next Page >