Search Results


  • Drawing lines in 3D space

    - by DeadMG
    When attempting to draw a line in 3D space with D3DPT_LINELIST, Direct3D gives me an error about an invalid vertex declaration, saying that it cannot be converted to an FVF. I am using the same vertex declaration and shader/stream setup as for my D3DPT_TRIANGLELIST rendering, which works absolutely correctly. How can I use D3DPT_LINELIST to render some lines in 3D space? Edit: Oops, I forgot my code. Here is my raw draw call:

        D3DCALL(device->SetStreamSource(1, PerBoneBuffer.get(), 0, sizeof(PerInstanceData)));
        D3DCALL(device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1));
        D3DCALL(device->SetStreamSource(0, LineVerts, 0, sizeof(D3DXVECTOR3)));
        D3DCALL(device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | lines.size()));
        D3DCALL(device->SetIndices(LineIndices));
        PerInstanceData* data;
        std::vector<Wide::Render::Line*> lines_vec(lines.begin(), lines.end());
        D3DCALL(PerBoneBuffer->Lock(0, lines.size() * sizeof(PerInstanceData),
            reinterpret_cast<void**>(&data), D3DLOCK_DISCARD));
        std::for_each(lines.begin(), lines.end(), [&](Wide::Render::Line* ptr) {
            data->Color = D3DXColor(ptr->Colour);
            D3DXMATRIXA16 Translate, Scale, Rotate;
            D3DXMatrixTranslation(&Translate, ptr->Start.x, ptr->Start.y, ptr->Start.z);
            D3DXMatrixScaling(&Scale, ptr->Scale, 1, 1);
            D3DXMatrixRotationQuaternion(&Rotate, &D3DQuaternion(ptr->Rotation));
            data->World = Scale * Rotate * Translate;
        });
        D3DCALL(PerBoneBuffer->Unlock());
        D3DCALL(device->DrawIndexedPrimitive(D3DPRIMITIVETYPE::D3DPT_LINELIST, 0, 0, 2, 0, 1));

    Here's my vertex declaration:

        D3DVERTEXELEMENT9 BasicMeshVertices[] = {
            {0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},
            {1, 0, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0},
            {1, 16, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 1},
            {1, 32, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 2},
            {1, 48, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 3},
            {1, 64, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR, 0},
            D3DDECL_END()
        };

    The LineIndices are just 0, 1 and the LineVerts are just {0,0,0} and {1,0,0}, to represent a unit 3D line along the X axis.
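    Two things are worth ruling out here (hedged suggestions, not from the original thread). First, the "cannot be converted to an FVF" message is normally raised only when Direct3D falls back to the fixed-function pipeline, so it is worth confirming the vertex shader really is bound for the line pass. Second, as far as the D3D9 documentation goes, stream-frequency instancing is only supported for indexed triangle primitives, so a plain non-instanced draw isolates the declaration from the instancing setup. A minimal C++ sketch of such a test draw (names like lineDecl are illustrative; LineVerts/LineIndices as above):

        // Assumes an initialized IDirect3DDevice9* device and the same vertex
        // shader as the triangle path bound via device->SetVertexShader(...).
        D3DVERTEXELEMENT9 elems[] = {
            {0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},
            D3DDECL_END()
        };
        IDirect3DVertexDeclaration9* lineDecl = NULL;
        D3DCALL(device->CreateVertexDeclaration(elems, &lineDecl));
        D3DCALL(device->SetVertexDeclaration(lineDecl));
        // Reset stream 0 to a plain (non-instanced) frequency; leftover
        // D3DSTREAMSOURCE_* settings from the instanced path can leak into
        // later draw calls.
        D3DCALL(device->SetStreamSourceFreq(0, 1));
        D3DCALL(device->SetStreamSource(0, LineVerts, 0, sizeof(D3DXVECTOR3)));
        D3DCALL(device->SetIndices(LineIndices));
        D3DCALL(device->DrawIndexedPrimitive(D3DPT_LINELIST, 0, 0, 2, 0, 1));

    If this draw succeeds, the problem is in the instancing setup rather than in the declaration itself.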

    Read the article

  • Dynamic/Adaptive RLE

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map maker thingy - all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each one of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions, and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow production down too much (and I'm on a somewhat tight schedule). Well, worst case scenario I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap< Integer, Integer [][]
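    A 128-tile row is small enough that simply re-encoding the whole row after each edit costs at most a few hundred array operations, which should be negligible next to rendering; the "dynamic" part can just be decode-modify-encode on the touched row (or keeping rows decoded in a cache while the editor has the map open). A minimal Java sketch along those lines (my illustration; names are invented):

        import java.util.ArrayList;
        import java.util.List;

        final class RowRle {
            // Encodes a row of tile ids as (runLength, tileId) pairs.
            static int[] encode(int[] row) {
                List<Integer> out = new ArrayList<>();
                int i = 0;
                while (i < row.length) {
                    int id = row[i], run = 1;
                    while (i + run < row.length && row[i + run] == id) run++;
                    out.add(run);
                    out.add(id);
                    i += run;
                }
                int[] packed = new int[out.size()];
                for (int k = 0; k < packed.length; k++) packed[k] = out.get(k);
                return packed;
            }

            // Expands (runLength, tileId) pairs back into a row of the given length.
            static int[] decode(int[] packed, int rowLength) {
                int[] row = new int[rowLength];
                int pos = 0;
                for (int k = 0; k < packed.length; k += 2) {
                    for (int c = 0; c < packed[k]; c++) row[pos++] = packed[k + 1];
                }
                return row;
            }

            // One tile edit: decode, change, re-encode. For 128-wide rows this
            // is cheap enough to do on every edit.
            static int[] setTile(int[] packed, int rowLength, int x, int tileId) {
                int[] row = decode(packed, rowLength);
                row[x] = tileId;
                return encode(row);
            }
        }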

    Read the article

  • "Size mismatch" apt error when installing openJDK

    - by siddanth
    When I try to install openjdk-7-jre-headless I get the following error:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          ca-certificates-java icedtea-7-jre-jamvm java-common libcups2 libjpeg62
          liblcms2-2 libnspr4 libnss3 libnss3-1d openjdk-7-jre-lib tzdata tzdata-java
        Suggested packages:
          default-jre equivs cups-common liblcms2-utils libnss-mdns sun-java6-fonts
          ttf-dejavu-extra ttf-baekmuk ttf-unfonts ttf-unfonts-core ttf-sazanami-gothic
          ttf-kochi-gothic ttf-sazanami-mincho ttf-kochi-mincho ttf-wqy-microhei
          ttf-wqy-zenhei ttf-indic-fonts-core ttf-telugu-fonts ttf-oriya-fonts
          ttf-kannada-fonts ttf-bengali-fonts
        The following NEW packages will be installed:
          ca-certificates-java icedtea-7-jre-jamvm java-common libcups2 libjpeg62
          liblcms2-2 libnspr4 libnss3 libnss3-1d openjdk-7-jre-headless
          openjdk-7-jre-lib tzdata-java
        The following packages will be upgraded:
          tzdata
        1 upgraded, 12 newly installed, 0 to remove and 122 not upgraded.
        Need to get 41.2 MB/43.5 MB of archives.
        After this operation, 64.0 MB of additional disk space will be used.
        Get:5 http://in.archive.ubuntu.com/ubuntu/ oneiric/main java-common all 0.42ubuntu2 [62.4 kB]
        Fetched 41.1 MB in 4min 5s (167 kB/s)
        Failed to fetch http://in.archive.ubuntu.com/ubuntu/pool/main/j/java-common/java-common_0.42ubuntu2_all.deb Size mismatch
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    I am unable to solve this. Am I missing something? Please help me out.
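    "Size mismatch" means the downloaded .deb does not match the size promised by the package index - usually a corrupt cached download or a stale mirror. A hedged first-aid sequence is to clear the cache, refresh the index, and retry:

        sudo apt-get clean
        sudo apt-get update
        sudo apt-get install openjdk-7-jre-headless

    If the mirror itself is serving a bad file, switching from in.archive.ubuntu.com to another mirror (Software Sources, "Download from") and repeating the update usually clears it.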

    Read the article

  • SQL SERVER – Follow up – Usage of $rowguid and $IDENTITY

    - by pinaldave
    The most common question I receive is: why do I blog? The answer is even simpler - I blog because I get extremely constructive comments and conversation from people like DHall and Kumar Harsh. Earlier this week, I shared a conversation between Madhivanan and myself regarding how to find out whether a table uses a ROWGUID column or not. I encourage all of you to read the conversation here: SQL SERVER - Identifying Column Data Type of uniqueidentifier without Querying System Tables. In simple words, the conversation between Madhivanan and myself brought out a simple query which returns the values of the UNIQUEIDENTIFIER column without knowing the name of the column. David Hall wrote a few excellent comments as a follow-up, and every SQL enthusiast should read them first, second and third. David always brings positive energy; he first shows the limitations of my solution here and here, which he follows up with his own solution here. As he said, his solution is also not perfect, but it indeed leaves learning bites for all of us - worth reading if you are interested in unorthodox solutions. Kumar Harsh suggested that one can also find the Identity column used in a table in a very similar way, using $IDENTITY. Here is how one can do the same:

        DECLARE @t TABLE (
            GuidCol UNIQUEIDENTIFIER DEFAULT newsequentialid() ROWGUIDCOL,
            IDENTITYCL INT IDENTITY(1,1),
            data VARCHAR(60)
        )
        INSERT INTO @t (data) SELECT 'test'
        INSERT INTO @t (data) SELECT 'test1'
        SELECT $rowguid, $IDENTITY FROM @t

    There are alternate ways to find an identity column in the database as well. The following query will give a list of all identity column names with their corresponding table names (one more catalog-view variant is sketched below):

        SELECT SCHEMA_NAME(so.schema_id) SchemaName,
               so.name TableName,
               sc.name ColumnName
        FROM sys.objects so
        INNER JOIN sys.columns sc
            ON so.OBJECT_ID = sc.OBJECT_ID
            AND sc.is_identity = 1

    Let me know if you use any alternate method related to identity; I would like to know what you do and how you do it when you have to deal with an Identity column. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
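    The promised variant: the sys.identity_columns catalog view lists every identity column directly, along with its seed and increment, so no is_identity filter is needed:

        SELECT OBJECT_SCHEMA_NAME(ic.object_id) AS SchemaName,
               OBJECT_NAME(ic.object_id) AS TableName,
               ic.name AS ColumnName,
               ic.seed_value,
               ic.increment_value
        FROM sys.identity_columns ic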

    Read the article

  • SQLAuthority News – Microsoft Whitepaper – AlwaysOn Solution Guide: Offloading Read-Only Workloads to Secondary Replicas

    - by pinaldave
    SQL Server 2012 has many interesting features, but the most talked-about feature is AlwaysOn. Performance tuning is always a hot topic; I see lots of demand for it and lots of business around it. However, when people talk about performance tuning they often think of it as either query tuning or server tuning. Both are valid points, but a performance tuning expert usually understands the business workload and business logic before making suggestions. For example, if a performance tuning expert analyzes a workload and realizes that there are plenty of reports as well as read-only queries on the server, they can consider alternate options for handling them. If read-only data is not required in real time, or slightly delayed data is acceptable, it makes sense to divide the workload. A secondary replica of the original data which can serve all the read-only queries and reports is a good idea in most cases where there is plenty of workload that does not depend on real-time data. SQL Server 2012 introduced AlwaysOn, which fits this scenario very well and provides a solution for read-only workloads. Microsoft has recently announced a white paper on exactly this subject. I recommend it to every SQL enthusiast who is going to implement a solution to offload read-only workloads to secondary replicas. Download the white paper: AlwaysOn Solution Guide: Offloading Read-Only Workloads to Secondary Replicas. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: AlwaysOn

    Read the article

  • How do I prevent missing network from slowing down boot-up?

    - by Ravi S Ghosh
    I have been having rather slow boot on Ubuntu 12.04. Lately, I tried to figure out the reason and it seems to be the network connection which does not get connected and requires multiple attempts. Here is part of dmesg [ 2.174349] EXT4-fs (sda2): INFO: recovery required on readonly filesystem [ 2.174352] EXT4-fs (sda2): write access will be enabled during recovery [ 2.308172] firewire_core: created device fw0: GUID 384fc00005198d58, S400 [ 2.333457] usb 7-1.2: new low-speed USB device number 3 using uhci_hcd [ 2.465896] EXT4-fs (sda2): recovery complete [ 2.466406] EXT4-fs (sda2): mounted filesystem with ordered data mode. Opts: (null) [ 2.589440] usb 7-1.3: new low-speed USB device number 4 using uhci_hcd **[ 18.292029] ADDRCONF(NETDEV_UP): eth0: link is not ready** [ 18.458958] udevd[377]: starting version 175 [ 18.639482] Adding 4200960k swap on /dev/sda5. Priority:-1 extents:1 across:4200960k [ 19.314127] wmi: Mapper loaded [ 19.426602] r592 0000:09:01.2: PCI INT B -> GSI 18 (level, low) -> IRQ 18 [ 19.426739] r592: driver successfully loaded [ 19.460105] input: Dell WMI hotkeys as /devices/virtual/input/input5 [ 19.493629] lp: driver loaded but no devices found [ 19.497012] cfg80211: Calling CRDA to update world regulatory domain [ 19.535523] ACPI Warning: _BQC returned an invalid level (20110623/video-480) [ 19.539457] acpi device:03: registered as cooling_device2 [ 19.539520] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A08:00/device:01/LNXVIDEO:00/input/input6 [ 19.539568] ACPI: Video Device [M86] (multi-head: yes rom: no post: no) [ 19.578060] Linux video capture interface: v2.00 [ 19.667708] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2) [ 19.763171] r852 0000:09:01.3: PCI INT B -> GSI 18 (level, low) -> IRQ 18 [ 19.763258] r852: driver loaded successfully [ 19.854769] input: Microsoft Comfort Curve Keyboard 2000 as /devices/pci0000:00/0000:00:1d.1/usb7/7-1/7-1.2/7-1.2:1.0/input/input7 [ 19.854864] generic-usb 0003:045E:00DD.0001: input,hidraw0: USB HID v1.11 Keyboard [Microsoft Comfort Curve Keyboard 2000] on usb-0000:00:1d.1-1.2/input0 [ 19.878605] input: Microsoft Comfort Curve Keyboard 2000 as /devices/pci0000:00/0000:00:1d.1/usb7/7-1/7-1.2/7-1.2:1.1/input/input8 [ 19.878698] generic-usb 0003:045E:00DD.0002: input,hidraw1: USB HID v1.11 Device [Microsoft Comfort Curve Keyboard 2000] on usb-0000:00:1d.1-1.2/input1 [ 19.902779] input: DELL DELL USB Laser Mouse as /devices/pci0000:00/0000:00:1d.1/usb7/7-1/7-1.3/7-1.3:1.0/input/input9 [ 19.925034] generic-usb 0003:046D:C063.0003: input,hidraw2: USB HID v1.10 Mouse [DELL DELL USB Laser Mouse] on usb-0000:00:1d.1-1.3/input0 [ 19.925057] usbcore: registered new interface driver usbhid [ 19.925059] usbhid: USB HID core driver [ 19.942362] uvcvideo: Found UVC 1.00 device Laptop_Integrated_Webcam_2M (0c45:63ea) [ 19.947004] input: Laptop_Integrated_Webcam_2M as /devices/pci0000:00/0000:00:1a.7/usb1/1-6/1-6:1.0/input/input10 [ 19.947075] usbcore: registered new interface driver uvcvideo [ 19.947077] USB Video Class driver (1.1.1) [ 20.145232] Intel(R) Wireless WiFi Link AGN driver for Linux, in-tree: [ 20.145235] Copyright(c) 2003-2011 Intel Corporation [ 20.145327] iwlwifi 0000:04:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17 [ 20.145357] iwlwifi 0000:04:00.0: setting latency timer to 64 [ 20.145402] iwlwifi 0000:04:00.0: pci_resource_len = 0x00002000 [ 20.145404] iwlwifi 0000:04:00.0: pci_resource_base = ffffc90000674000 [ 20.145407] iwlwifi 0000:04:00.0: HW Revision ID = 0x0 [ 20.145531] 
iwlwifi 0000:04:00.0: irq 46 for MSI/MSI-X [ 20.145613] iwlwifi 0000:04:00.0: Detected Intel(R) WiFi Link 5100 AGN, REV=0x54 [ 20.145720] iwlwifi 0000:04:00.0: L1 Enabled; Disabling L0S [ 20.167535] iwlwifi 0000:04:00.0: device EEPROM VER=0x11f, CALIB=0x4 [ 20.167538] iwlwifi 0000:04:00.0: Device SKU: 0Xf0 [ 20.167567] iwlwifi 0000:04:00.0: Tunable channels: 13 802.11bg, 24 802.11a channels [ 20.172779] fglrx: module license 'Proprietary. (C) 2002 - ATI Technologies, Starnberg, GERMANY' taints kernel. [ 20.172783] Disabling lock debugging due to kernel taint [ 20.250115] [fglrx] Maximum main memory to use for locked dma buffers: 3759 MBytes. [ 20.250567] [fglrx] vendor: 1002 device: 9553 count: 1 [ 20.251256] [fglrx] ioport: bar 1, base 0x2000, size: 0x100 [ 20.251271] pci 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 20.251277] pci 0000:01:00.0: setting latency timer to 64 [ 20.251559] [fglrx] Kernel PAT support is enabled [ 20.251578] [fglrx] module loaded - fglrx 8.96.4 [Mar 12 2012] with 1 minors [ 20.310385] iwlwifi 0000:04:00.0: loaded firmware version 8.83.5.1 build 33692 [ 20.310598] Registered led device: phy0-led [ 20.310628] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain [ 20.372306] ieee80211 phy0: Selected rate control algorithm 'iwl-agn-rs' [ 20.411015] psmouse serio1: synaptics: Touchpad model: 1, fw: 7.2, id: 0x1c0b1, caps: 0xd04733/0xa40000/0xa0000 [ 20.454232] input: SynPS/2 Synaptics TouchPad as /devices/platform/i8042/serio1/input/input11 [ 20.545636] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain [ 20.545640] cfg80211: World regulatory domain updated: [ 20.545642] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [ 20.545644] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 20.545647] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 20.545649] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 20.545652] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 20.545654] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 20.609484] type=1400 audit(1340502633.160:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=693 comm="apparmor_parser" [ 20.609494] type=1400 audit(1340502633.160:3): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=642 comm="apparmor_parser" [ 20.609843] type=1400 audit(1340502633.160:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=693 comm="apparmor_parser" [ 20.609852] type=1400 audit(1340502633.160:5): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=642 comm="apparmor_parser" [ 20.610047] type=1400 audit(1340502633.160:6): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=693 comm="apparmor_parser" [ 20.610060] type=1400 audit(1340502633.160:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=642 comm="apparmor_parser" [ 20.610476] type=1400 audit(1340502633.160:8): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=814 comm="apparmor_parser" [ 20.610829] type=1400 audit(1340502633.160:9): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=814 comm="apparmor_parser" [ 
20.611035] type=1400 audit(1340502633.160:10): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=814 comm="apparmor_parser" [ 20.661912] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22 [ 20.661982] snd_hda_intel 0000:00:1b.0: irq 47 for MSI/MSI-X [ 20.662013] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [ 20.770289] input: HDA Intel Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input12 [ 20.770689] snd_hda_intel 0000:01:00.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17 [ 20.770786] snd_hda_intel 0000:01:00.1: irq 48 for MSI/MSI-X [ 20.770815] snd_hda_intel 0000:01:00.1: setting latency timer to 64 [ 20.994040] HDMI status: Codec=0 Pin=3 Presence_Detect=0 ELD_Valid=0 [ 20.994189] input: HDA ATI HDMI HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input13 [ 21.554799] vesafb: mode is 1024x768x32, linelength=4096, pages=0 [ 21.554802] vesafb: scrolling: redraw [ 21.554804] vesafb: Truecolor: size=0:8:8:8, shift=0:16:8:0 [ 21.557342] vesafb: framebuffer at 0xd0000000, mapped to 0xffffc90011800000, using 3072k, total 3072k [ 21.557498] Console: switching to colour frame buffer device 128x48 [ 21.557516] fb0: VESA VGA frame buffer device [ 21.987338] EXT4-fs (sda2): re-mounted. Opts: errors=remount-ro [ 22.184693] EXT4-fs (sda6): mounted filesystem with ordered data mode. Opts: (null) [ 27.362440] iwlwifi 0000:04:00.0: RF_KILL bit toggled to disable radio. [ 27.436988] init: failsafe main process (986) killed by TERM signal [ 27.970112] ppdev: user-space parallel port driver [ 28.198917] Bluetooth: Core ver 2.16 [ 28.198935] NET: Registered protocol family 31 [ 28.198937] Bluetooth: HCI device and connection manager initialized [ 28.198940] Bluetooth: HCI socket layer initialized [ 28.198941] Bluetooth: L2CAP socket layer initialized [ 28.198947] Bluetooth: SCO socket layer initialized [ 28.226135] Bluetooth: RFCOMM TTY layer initialized [ 28.226141] Bluetooth: RFCOMM socket layer initialized [ 28.226143] Bluetooth: RFCOMM ver 1.11 [ 28.445620] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 28.445623] Bluetooth: BNEP filters: protocol multicast [ 28.524578] type=1400 audit(1340502641.076:11): apparmor="STATUS" operation="profile_load" name="/usr/lib/cups/backend/cups-pdf" pid=1052 comm="apparmor_parser" [ 28.525018] type=1400 audit(1340502641.076:12): apparmor="STATUS" operation="profile_load" name="/usr/sbin/cupsd" pid=1052 comm="apparmor_parser" [ 28.629957] type=1400 audit(1340502641.180:13): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=1105 comm="apparmor_parser" [ 28.630325] type=1400 audit(1340502641.180:14): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=1105 comm="apparmor_parser" [ 28.630535] type=1400 audit(1340502641.180:15): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=1105 comm="apparmor_parser" [ 28.645266] type=1400 audit(1340502641.196:16): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=1104 comm="apparmor_parser" **[ 28.751922] ADDRCONF(NETDEV_UP): wlan0: link is not ready** [ 28.753653] tg3 0000:08:00.0: irq 49 for MSI/MSI-X **[ 28.856127] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 28.857034] ADDRCONF(NETDEV_UP): eth0: link is not ready** [ 28.871080] type=1400 audit(1340502641.420:17): apparmor="STATUS" operation="profile_load" 
name="/usr/lib/telepathy/mission-control-5" pid=1108 comm="apparmor_parser" [ 28.871519] type=1400 audit(1340502641.420:18): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=1108 comm="apparmor_parser" [ 28.874905] type=1400 audit(1340502641.424:19): apparmor="STATUS" operation="profile_replace" name="/usr/lib/cups/backend/cups-pdf" pid=1113 comm="apparmor_parser" [ 28.875354] type=1400 audit(1340502641.424:20): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/cupsd" pid=1113 comm="apparmor_parser" [ 30.477976] tg3 0000:08:00.0: eth0: Link is up at 100 Mbps, full duplex [ 30.477979] tg3 0000:08:00.0: eth0: Flow control is on for TX and on for RX **[ 30.478390] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready** [ 31.110269] fglrx_pci 0000:01:00.0: irq 50 for MSI/MSI-X [ 31.110859] [fglrx] Firegl kernel thread PID: 1327 [ 31.111021] [fglrx] Firegl kernel thread PID: 1329 [ 31.111408] [fglrx] Firegl kernel thread PID: 1330 [ 31.111543] [fglrx] IRQ 50 Enabled [ 31.712938] [fglrx] Gart USWC size:1224 M. [ 31.712941] [fglrx] Gart cacheable size:486 M. [ 31.712945] [fglrx] Reserved FB block: Shared offset:0, size:1000000 [ 31.712948] [fglrx] Reserved FB block: Unshared offset:fc2b000, size:3d5000 [ 31.712950] [fglrx] Reserved FB block: Unshared offset:1fffb000, size:5000 [ 41.312020] eth0: no IPv6 routers present As you can see I get multiple instances of [ 28.856127] ADDRCONF(NETDEV_UP): eth0: link is not ready and then finally it becomes read and I get the message [ 30.478390] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready. I searched askubuntun, ubuntuforum, and the web but couldn't find a solution. Any help would be very much appreciated. Here is the bootchart

    Read the article

  • New Study Guide: "Oracle Solaris 11 System Administration"

    - by Harold Green
    A new helpful resource for Solaris 11 exam preparation has just been released. "Oracle Solaris 11 System Administration" by author and educator Bill Calkins covers effective installation and administration of an Oracle Solaris 11 system. In addition to being a valuable, comprehensive study guide, the book also serves as a complete reference for the everyday tasks of an Oracle Solaris system administrator. This book can be a valuable addition to your preparation for the Oracle Solaris 11 Advanced System Administration (1Z1-822) certification exam. This exam, combined with the Oracle Certified Associate, Oracle Solaris 11 System Administrator (OCA) certification and a training requirement, will earn you the Oracle Certified Professional, Oracle Solaris 11 System Administrator (OCP) certification. This valuable credential is designed for Oracle Solaris system administrators with a strong foundation in the Oracle Solaris 11 operating system as well as a fundamental understanding of the UNIX operating system, commands, and utilities. The certification covers core topics such as configuring network interfaces and managing swap configurations, crash dumps, and core files. The 822 exam is currently in beta at the greatly discounted rate of $50 USD, but the beta period will soon be closing (likely at the end of this month, June 2013), so take advantage of the opportunity to be one of the first to hold this new certification. Bill Calkins also recently posted some tips for taking Oracle Solaris 11 certification exams.

    Read the article

  • The Return Of __FILE__ And __LINE__ In .NET 4.5

    - by Alois Kraus
    Good things are hard to kill. Two of the most useful predefined compiler macros in C/C++ were __FILE__ and __LINE__, which expand to the compilation unit's file name and the line number at which the value is encountered by the compiler. After 4.5 versions of .NET we are on par with C/C++ again. It is of course not a simple compiler-expanded macro but an attribute, yet it serves exactly the same purpose. Now we get:

        CallerLineNumberAttribute == __LINE__
        CallerFilePathAttribute   == __FILE__
        CallerMemberNameAttribute == __FUNCTION__ (MSVC extension)

    The most important one is CallerMemberNameAttribute, which is very useful for implementing the INotifyPropertyChanged interface without having to hard-code the name of the property anymore. Now you can simply decorate your change method with the new CallerMemberName attribute, and you get the property name as a string inserted directly by the C# compiler at compile time.

        public string UserName
        {
            get { return _userName; }
            set
            {
                _userName = value;
                RaisePropertyChanged(); // no more RaisePropertyChanged("UserName")!
            }
        }

        protected void RaisePropertyChanged([CallerMemberName] string member = "")
        {
            var copy = PropertyChanged;
            if (copy != null)
            {
                // PropertyChanged expects (sender, args)
                copy(this, new PropertyChangedEventArgs(member));
            }
        }

    Nice and handy. This was obviously the prime reason to implement this feature in the C# 5.0 compiler. You can repurpose the feature for tracing, to get your hands on the method name of your caller (along with other data) very fast. All the information is filled in at compile time, which is much faster than other approaches like walking the stack. The example on MSDN shows the usage of this attribute:

        public static void TraceMessage(string message,
            [CallerMemberName] string memberName = "",
            [CallerFilePath] string sourceFilePath = "",
            [CallerLineNumber] int sourceLineNumber = 0)
        {
            Console.WriteLine("Hi {0} {1} {2}({3})",
                message, memberName, sourceFilePath, sourceLineNumber);
        }

    When I think of tracing, I usually want an API which allows me to trace method enter and leave, and to trace messages with a severity like Info, Warning, Error. When I print a trace message it is very useful to print the method and type name as well. So your API must either be able to pass the method and type names as strings, or extract them automatically by walking back one stack frame and fetching the information from there. The first glaring deficiency is that there is no CallerTypeAttribute yet, because the C# compiler team was not satisfied with its performance.
    A usable trace API might therefore look like this:

        enum TraceTypes
        {
            None = 0,
            EnterLeave = 1 << 0,
            Info = 1 << 1,
            Warn = 1 << 2,
            Error = 1 << 3
        }

        class Tracer : IDisposable
        {
            string Type;
            string Method;

            public Tracer(string type, string method)
            {
                Type = type;
                Method = method;
                if (IsEnabled(TraceTypes.EnterLeave, Type, Method))
                {
                }
            }

            private bool IsEnabled(TraceTypes traceTypes, string Type, string Method)
            {
                // Do checking here if tracing is enabled
                return false;
            }

            public void Info(string fmt, params object[] args) { }
            public void Warn(string fmt, params object[] args) { }
            public void Error(string fmt, params object[] args) { }
            public static void Info(string type, string method, string fmt, params object[] args) { }
            public static void Warn(string type, string method, string fmt, params object[] args) { }
            public static void Error(string type, string method, string fmt, params object[] args) { }

            public void Dispose()
            {
                // trace method leave
            }
        }

    This minimal trace API is very fast but hard to maintain, since you need to pass in the type and method names as hard-coded strings which can change from time to time. But now we have at least CallerMemberName to get rid of the explicit method parameter, right? Not really. Since any acceptably usable trace API should have a method signature like Tracexxx(... string fmt, params object[] args), we are not able to add additional optional parameters after the args array. If we put them before the format string, we would need to make them optional as well, which would mean the compiler would have to figure out which arguments are the trace message and which are its arguments (not likely), or we would need to specify everything explicitly, just like before. There are ways around this by providing a myriad of overloads which in the end are routed to the very same method, but that is ugly. I am not sure whether nobody inside MS agrees that the above API is reasonable to have, or (more likely) the whole talk about using this feature for diagnostic purposes was not a core goal at all but a simple byproduct of making the life of INotifyPropertyChanged implementers easier. A way around this would be to allow, after the params keyword, another set of optional arguments which are always filled by the compiler, but I do not know if this is an easy change. The thing I am missing much more is the not-provided CallerType attribute. But not in the way you would think of. In the API above I added some filtering based on method and type to stay as fast as possible for types where tracing is not enabled at all. It should be no more expensive than an additional method call and a bool variable check to see whether tracing is enabled for a type at all. That data is tightly bound to the calling type and method and should therefore become part of the static type instance. Since extending the CLR type system for tracing is not something I expect to happen, I have come up with an alternative approach which allows me basically to attach run-time data to any existing type object in a super fast way. The key to success is the usage of generics:

        class Tracer<T> : IDisposable
        {
            string Method;

            public Tracer(string method)
            {
                Method = method;
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                }
            }

            public void Dispose()
            {
                if (TraceData<T>.Instance.Enabled.HasFlag(TraceTypes.EnterLeave))
                {
                }
            }

            public static void Info(string fmt, params object[] args) { }

            /// <summary>
            /// Every type gets its own instance with a fresh set of variables to
            /// describe the current filter status.
            /// </summary>
            /// <typeparam name="UsingType"></typeparam>
            internal class TraceData<UsingType>
            {
                internal static TraceData<UsingType> Instance = new TraceData<UsingType>();

                // Flag if we need to re-init the trace data in case of reconfigured
                // trace settings at run time.
                public bool IsInitialized = false;

                // Enabled trace levels for this type.
                public TraceTypes Enabled = TraceTypes.None;
            }
        }

    We do not need to pass the type as a string or Type object to the trace API. Instead we define a generic API that accepts the using type as a generic parameter. Then we can create a TraceData static instance which, due to the nature of generics, is a fresh instance for every new type parameter. My tests on my home machine have shown that this approach is as fast as a simple bool flag check. If you have an application with many types using tracing, you do not want to bring the app down by simply enabling tracing for one special, rarely used type. The trace filter path for types where tracing is not enabled must therefore be the fastest code path. This approach has the nice side effect that if you store the TraceData instances in one global list, you can reconfigure tracing at run time safely by simply setting the IsInitialized flag to false. A similar effect can be achieved with a global static Dictionary<Type, TraceData> object, but big hash tables have random memory access semantics, which is bad for cache locality, and you always need to pay for the lookup, which involves hash code generation, an equality check, and an indexed array access. The generic version is wicked fast and allows you to add more features to your tracing API with minimal performance overhead. But it is cumbersome to write the generic type argument explicitly every time, and worse, if you refactor code and move parts of it to other classes, it might be that you can no longer configure tracing correctly. I would therefore like to decorate my type with an attribute:

        [CallerType]
        class Tracer<T> : IDisposable

    to tell the compiler to fill in the generic type argument automatically:

        class Program
        {
            static void Main(string[] args)
            {
                using (var t = new Tracer()) // equivalent to new Tracer<Program>()
                {

    That would be really useful and super fast, since you would not need to pass any type object around but would still have full type information at hand. This change would be breaking if another non-generic type existed in the same namespace where now the generic counterpart would be preferred. But this is an acceptable risk in my opinion, since you can already get conflicts today if two generic types of the same name are defined in different namespaces. This would only be a variation of that issue. When you think about this further, you can add more features, like tracing the exception in your Dispose method if the method is left with an exception, with that little trick I wrote about some time ago. You can think of tracing as a super fast and configurable switch to write data to an output destination or to execute alternative actions. With such an infrastructure you can, for example:

        - Reconfigure tracing at run time.
        - Take a memory dump when a specific method is left with a specific exception.
        - Throw an exception when a specific trace statement is hit (useful for testing error conditions).
        - Execute a passed delegate which e.g. dumps additional state when enabled.
        - Write data to an in-memory ring buffer and dump it when specific events occur (e.g. a method is left with an exception, triggered from outside).
        - Write data to an output device.
        - ...

    This stuff is really useful to have when your code is in production on a mission-critical server and you need to find the root cause of sporadic crashes of your application. It could be a buggy graphics card driver which throws access violations into your application (OK, with .NET 4 not anymore, unless you enable a compatibility flag), where you would like to have a minidump, or you may have reached, after two weeks of operation, a state where you need a full memory dump at a specific point in time in the middle of a transaction. On my older machine I get, with this super fast approach, 50 million traces/s when tracing is disabled. When I know that tracing is enabled for this type, I can walk the stack by using StackFrameHelper.GetStackFramesInternal to check further whether a specific action or output device is configured for this method, which is about 2-3 times faster than the regular StackTrace class. Even with one String.Format I am down to 3 million traces/s, so performance is not so important anymore, since at that point I actually want to do something. The CallerMemberName feature of the C# 5 compiler is nice, but I would have preferred to get direct access to the MethodHandle and not the stringified version of it. But I really would like to see a CallerType attribute implemented to fill in the generic type argument of the call site, to augment the static CLR type data with run-time data.
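    For completeness, a tiny self-contained sketch showing what the compiler substitutes at a call site (the file path and line number are simply whatever the compiler sees when it compiles the call):

        using System;
        using System.Runtime.CompilerServices;

        class CallerInfoDemo
        {
            static void TraceMessage(string message,
                [CallerMemberName] string memberName = "",
                [CallerFilePath] string sourceFilePath = "",
                [CallerLineNumber] int sourceLineNumber = 0)
            {
                Console.WriteLine("{0} from {1} at {2}({3})",
                    message, memberName, sourceFilePath, sourceLineNumber);
            }

            static void Main()
            {
                // The compiler rewrites this into something like
                // TraceMessage("Hello", "Main", @"C:\...\CallerInfoDemo.cs", 19);
                TraceMessage("Hello");
            }
        }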

    Read the article

  • Unable to install Ubuntu 12.04.1 on Virtualbox on Windows 7 Host

    - by arcube
    I would like to install Ubuntu 12.04.1 in VirtualBox on a Windows 7 host; however, I get a black screen after selecting to install Ubuntu, i.e., right after choosing English as the language. I have tried the 32-bit and 64-bit desktop versions, and have also read through some guides, e.g. http://www.psychocats.net/ubuntu/virtualbox, but I could not find any solution to this problem. BTW, I have Ubuntu installed using Virtual PC (not running at the same time), and I also have Mac OS X (Mountain Lion) installed using VirtualBox, so I do not think it is a problem with VirtualBox or the Windows 7 host. My VM configuration is 1 core and 1 GB of RAM using a 20 GB VDI; the rest is the default config. My hardware is a Core i7 3770K / Asus P8Z77-V Deluxe / G.Skill 32GB, which I believe should be more than sufficient to run another VM. I have moved my user profile from SSD to HDD via a junction; however, IMHO that should not matter. Has anyone else come across this problem and found a solution? Any help/thoughts on how to solve this would be very much appreciated.

    Read the article

  • Developing Ext JS Charts in NetBeans IDE

    - by Geertjan
    I took my first tentative steps into the world of Ext JS charts today, in NetBeans IDE 7.4. I will make a screencast soon showing how such charts can be created with NetBeans IDE and Ext JS. Setting up Ext JS is easy in NetBeans IDE because there's a JavaScript library browser, by means of which I can browse for the Ext JS libraries that I need, and then NetBeans IDE sets up the project for me. The JavaScript chart code comes directly from here: http://www.quizzpot.com/courses/learning-ext-js-3/articles/chart-series The index.html is as follows:

        <html>
            <head>
                <title>TODO supply a title</title>
                <meta charset="UTF-8">
                <meta name="viewport" content="width=device-width">
                <link rel="stylesheet" href="js/libs/extjs/resources/css/ext-all.css"/>
                <script src="js/libs/ext-core/ext-core.js"></script>
                <script src="js/libs/extjs/adapter/ext/ext-base-debug.js"></script>
                <script src="js/libs/extjs/ext-all-debug.js"></script>
                <script src="app.js"></script>
            </head>
            <body>
            </body>
        </html>

    More info on Ext JS: http://docs.sencha.com/extjs/4.1.3/ By the way, quite a few other articles are out there on Ext JS and NetBeans IDE, which I will be learning from during the coming days:

        http://netbeans.dzone.com/extjs-rest-netbeans
        http://netbeans.dzone.com/articles/create-your-first-extjs-4
        http://netbeans.dzone.com/articles/mixing-extjs-json-p-and-java

    Read the article

  • How do "custom software companies" deal with technical debt?

    - by andy
    What are "custom software companies"? By "custom software companies" I mean companies that make their money primarily from building custom, one off, bits of software. Example are agencies or middle-ware companies, or contractors/consultants like Redify. What's the opposite of "custom software companies"? The oposite of the above business model are companies that focus on long term products, whether they be deployable desktop/mobile apps, or SaaS software. A sure fire way to build up technical debt: I work for a company that attempts to focus on a suite of SaaS products. However, due to certain constraints we sometimes end up bending to the will of certain clients and we end building bits of custom software that can only be used for that client. This is a sure fire way to incur technical debt. Now we have a bit of software to maintain that adds nothing to our core product. If custom work is a sure fire way to build technical debt, how do agencies handle it? So that got me thinking. Companies who don't have a core product as the center of their business model, well they're always doing custom software work. How do they cope with the notion of technical debt? How does it not drive them into technical bankruptcy?

    Read the article

  • Oracle Cloud Solutions @ Cloud Expo East (June 10-12)

    - by Gene Eun
    Oracle is proud to be the Platinum Sponsor at next week's Cloud Expo East (June 10-12) at the Javits Center in New York City. This is the fourth consecutive year Oracle has sponsored Cloud Expo. As in years past, Oracle has a full schedule of sessions, shown below. We'd love to have you be our guest at Cloud Expo East, attend one of our sessions, and hear more about our thought leadership and leading solutions in Cloud and Big Data. We'll also have booth #207, so please stop by and see a demo of many of our cloud offerings.

        Tuesday, June 10, 4:40 pm - 5:15 pm
            Top 5 Best Practices for your Application Platform As a Service
            Track: Cloud Business and the API Economy | Deploying the Cloud - Room: TBD
        Wednesday, June 11, 9:10 am - 10:10 am
            Cloud Odyssey: A Hero's Quest
            Track: All Tracks (Keynote) - Room: Keynote Hall
        Wednesday, June 11, 10:15 am - 10:45 am
            Big Data Management System: Smart SQL Processing Across Hadoop and Your Data Warehouse
            Track: All Tracks (General Session) - Room: Keynote Hall
        Wednesday, June 11, 2:50 pm - 3:25 pm
            Plug into the Cloud: Your Blueprint to Database as a Service
            Track: Mobile | Hot Topics - Room: TBD
        Wednesday, June 11, 2:50 pm - 3:25 pm
            From Supply-led to Demand-led: Lead Your IT to Better Serve Your Users
            Track: Cloud Business and the API Economy | Deploying the Cloud - Room: TBD
        Thursday, June 12, 2:50 pm - 3:25 pm
            Reduce Complexity and Accelerate Innovation with IaaS and PaaS
            Track: Cloud Business and the API Economy | Deploying the Cloud - Room: TBD

    At Cloud Expo East, you'll get to learn about and experience the latest in Cloud and Big Data. If you don't have a pass to Cloud Expo, no problem: Oracle is giving away FREE VIP Gold Passes! We would love to have you attend Cloud Expo on us. Just go to Oracle's Cloud Expo 2014 event registration page and follow the instructions for a complimentary pass. Stay tuned to this blog and follow us on Twitter (@OracleCloudZone) during and after Cloud Expo for more insight and observations about this year's conference.

    Read the article

  • links for 2011-02-17

    - by Bob Rhubart
    ArchitectACEs - Oracle Wiki
    Putting a Face on the Architect ACE. The Oracle ACEs listed here have identified themselves, or have been identified by fellow ACEs, as software architects. (tags: ping.fm)

    Debra's thoughts on Oracle and User Groups: I did it - I did the Fusion UX Demo
    Oracle ACE Director Debra Lilley shares her experience in presenting a Fusion Applications demo at RMOUG. (tags: oracle otn oracleace)

    The Blas from Pas: JRuby Script to Monitor an Oracle WebLogic GridLink Data Source Remotely
    "In the WebLogic 10.3.4 release, a single data source implementation has been introduced to support Oracle RAC cluster. To simplify and consolidate its support for Oracle RAC, WebLogic Server has provided a single data source that is enhanced to support the capabilities of Oracle RAC." (tags: oracle otn weblogic)

    Show Notes: Bob Hensle on IT Strategies from Oracle (ArchBeat)
    In Part 1 Bob Hensle talked about the various documents in the IT Strategies from Oracle library. In Part 2 (now available) Bob talks about how SOA and other factors are reflected in those documents. (tags: oracle otn entarch podcast)

    PODCAST: Examining the state of EA and findings of recent survey | Open Group Blog
    A transcript of a podcast panel discussion on the findings from a study on the current state and future direction of enterprise architecture, from The Open Group Conference, San Diego 2011. (tags: entarch opengroup)

    A Virtual Dilemma (Antony Reynolds' Blog)
    SOA author Antony Reynolds shares a solution. (tags: oracle otn soa)

    Webcast: Live Online Forum: Oracle Security - February 24, 9:00am PT
    Speakers: Mary Ann Davidson, Chief Security Officer, Oracle; Tom Kyte, Senior Technical Architect, Oracle; Jeff Margolies, Partner, Security Practice, Accenture; Vipin Samar, VP, Database Security Product Development, Oracle; and Nishant Kaushik, Chief Strategist, Identity and Access Management. (tags: oracle security)

    Obama banks on cloud, consolidation, to hold down IT costs | Computerworld NZ
    President Obama's fiscal 2012 budget proposal keeps IT spending almost flat compared to fiscal 2010, mostly due to the consolidation of data centers and a shift to cloud computing systems. (tags: ping.fm)

    Read the article

  • Should mock objects for tests be created at a high or low level

    - by Danack
    When creating unit tests, what is the best way to create mock objects that provide data to other objects? Should the mocks be created at a "high level", intercepting calls as soon as possible, or at a "low level", so that as much of the real code as possible is still exercised? For example, I'm writing a test for some code that requires a NoteMapper object that allows Notes to be loaded from the DB:

        class NoteMapper
        {
            function getNote($sqlQueryFactory, $noteID)
            {
                // Create an SQL query from $sqlQueryFactory
                // Run that SQL
                // if null
                //     return null
                // else
                //     return new Note($dataFromSQLQuery)
            }
        }

    I could mock this object at a high level by creating a mock NoteMapper object, so that there are no calls to SQL at all, e.g.:

        class MockNoteMapper
        {
            function getNote($sqlQueryFactory, $noteID)
            {
                $mockData = array('Test Note title', 'Test note text');
                return new Note($mockData);
            }
        }

    Or I could do it at a very low level, by creating a MockSQLQueryFactory that, instead of actually querying the database, just provides mock data back, and passing that to the current NoteMapper object. It seems that creating mocks at a high level would be easier in the short term, but that in the long term doing it at a low level would be more powerful and possibly allow more automation of tests, e.g. by recording data in and out of a DB and then replaying that data for tests. Is there a recommended way of creating mocks? Are there any hard and fast rules about which is better, or should they both be used where appropriate?
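    For the low-level variant, here is a sketch of what such a fake factory could look like (the query API shape is assumed for illustration, not taken from the question):

        // A canned query: returns pre-baked rows instead of touching the DB.
        class MockSQLQuery
        {
            private $rows;
            public function __construct(array $rows) { $this->rows = $rows; }
            // Returns the next row, or null when exhausted - mimicking the
            // "if null return null" branch in NoteMapper::getNote().
            public function fetchRow() { return array_shift($this->rows); }
        }

        class MockSQLQueryFactory
        {
            public function createQuery($sql)
            {
                // Ignores $sql and always serves the same note row.
                return new MockSQLQuery(array(
                    array('title' => 'Test Note title', 'text' => 'Test note text'),
                ));
            }
        }

        // Usage: the real NoteMapper logic still runs; only the data source is fake.
        // $note = (new NoteMapper())->getNote(new MockSQLQueryFactory(), 42);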

    Read the article

  • Oracle Warehouse Builder and Enterprise ETL

    - by Fekete Zoltán
    The data sheet is fresh off the press! Enjoy: ODI Enterprise Edition: Warehouse Builder Enterprise ETL white paper. The good news: the base (core) functionality of Oracle Warehouse Builder can be used free of charge with every purchased Oracle Database. So what exactly is the OWB core functionality, and what do the options add? The Enterprise ETL functionality for OWB is available as part of the Oracle Data Integrator Enterprise Edition license. The features that are only available with the ODI EE license (the former OWB Enterprise ETL option is part of it) are also listed at the bottom of that text. They are:

        - Transportable ETL modules, multiple configurations, and pluggable mappings
        - Operators for pluggable mapping, pluggable mapping input signature, pluggable mapping output signature
        - Design environment support for RAC
        - Metadata change propagation
        - Schedulable mappings and process flows
        - Slowly Changing Dimensions (SCD) Type 2 and 3
        - XML files as a target
        - Target load ordering
        - Seeded spatial and streams transformations
        - Process Flow Activity templates
        - Process Flow variables support
        - Process Flow looping activities such as For Loop and While Loop
        - Process Flow Route and Notification activities
        - Metadata lineage and impact analysis
        - Metadata extensibility
        - Deployment to Discoverer EUL
        - Deployment to Oracle BI Beans catalog

    So if you want to use OWB in a more serious environment - deploying to multiple environments, and so on - you also need the ODI EE license. ODI Enterprise Edition: Warehouse Builder Enterprise ETL white paper.

    Read the article

  • Exalytics Increases Customer Revenue, and Saves Time, Risk & Cost

    - by Mike.Hallett(at)Oracle-BI&EPM
    We are getting some great proof points now from customers who are succeeding with the Exalytics in-memory system for OBI and Essbase. See below for some recent testimonials:

        - San Diego Unified School District harnesses attendance, procurement, and operational data with Oracle Exalytics, generating $4.4 million in savings: according to an independent assessment by Mainstay Salire, the district is on track to achieve substantial benefits from the Oracle Exalytics solution, including an $8.25 million increase in attendance revenue, $75,000 a year in operational efficiencies, and $1 million in hardware cost avoidance.
        - NilsonGroup chooses the Oracle Exalytics In-Memory Machine as its solution for accessing critical data to keep its stores competitive with real-time mobile BI: it took only "3 days to get up and running" with Exalytics. (Video)
        - Nykredit, in the Danish financial sector, describes its experiences from testing the Exalytics Business Intelligence Machine: "it was up and running within 4 days", with "more intuitive dashboards", "up to 70x better performance", and "cheaper maintenance and lower total cost of ownership". (Video)
        - Sodexo chose Oracle Exalytics as its business analytics platform, accelerating Essbase performance "more than 8x" for more than 2,000 Excel add-in users, "significantly changing how people in information management now deal with data". (Video)
        - Polk, Savvis, Nykredit, and Key Energy describe testing of the Oracle Exalytics In-Memory Machine: to "reach more users than we ever have before", "to fly through the data without impeding the analytic process", to "drive our enterprise groups into this tool instead of having departmental solutions", and for the "advanced visualisation this product enables". (Video)

    Read the article

  • App_offline.htm and SharePoint and wholly contained images&hellip;

    - by Shawn Cicoria
    The question came up today whether we could use an "app_offline.htm" file along with HTML in that file that references images. First, I wasn't 100% sure the app_offline.htm approach would work at all, but it sure did. Since it's just the ASP.NET hosting process that detects the file, it circumvents loading any HttpApplications (SharePoint) beyond just serving up the HTML content. The second question was about having something more than text, specifically <img> tags. Since the HttpHandlers take all requests for all resources through the ASP.NET pipeline, as soon as the app_offline.htm file is there, nothing else will get served from within that web application. Sure, we could host the images outside the web app, but what fun would that be? So, I found this link on using an image in app_offline.htm: http://pbodev.wordpress.com/2009/12/20/app_offline-htm-with-an-image-yes-we-can/ It turns out the src attribute (in fact many attributes) can take a stream of data represented by a MIME type and base64 encoding inline, such as:

        <img style="height:515px;width:700px;border-width:0px;"
             src="data:image/jpeg;base64,/9j/4AAQSkZJRgA

    One of the problems we had was that the image was too large, so we sliced up the image, but ended up with spaces between each of the slices. Lo and behold, it works with CSS as well:

        <style type="text/css">
            .Slice_1_jpg {
                position: absolute;
                left: 0px;
                top: 0px;
                width: 1011px;
                height: 148px;
                background: url("data:image/jpeg;base64,/9j/4AAQSkZ

    And the body is as follows:

        <body>
            <div class="Slice_1_jpg">
            </div>

    For this, I wrote a little ASP.NET site that, using a file upload control, will generate the necessary contents of what needs to go in the "data" value for the stream. A link to the code is here: http://cicoria.com/downloads/CreateBase64OfImage.zip
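    The core of such a generator is tiny; here is a hedged console sketch (not the linked tool - the file name and MIME handling are illustrative):

        using System;
        using System.IO;

        class DataUriGenerator
        {
            static void Main(string[] args)
            {
                // Path of the image to embed; "logo.jpg" is just an illustrative default.
                string path = args.Length > 0 ? args[0] : "logo.jpg";
                string mime = path.EndsWith(".png", StringComparison.OrdinalIgnoreCase)
                    ? "image/png" : "image/jpeg";
                // Base64-encode the raw bytes and emit the complete data URI.
                string base64 = Convert.ToBase64String(File.ReadAllBytes(path));
                Console.WriteLine("data:{0};base64,{1}", mime, base64);
            }
        }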

    Read the article

  • WP E Commerce Safe Mode restriction error [on hold]

    - by Mustafa Kamal
    My online shop, created with WP e-Commerce, broke after I moved it to another server. I can be sure that the problem comes from WP e-Commerce, because when I disable that plugin everything runs as expected. This is the exact error message:

        Warning: session_start() [function.session-start]: SAFE MODE Restriction in effect. The script whose uid is 515 is not allowed to access /tmp owned by uid 0 in /home/mikalu/public_html/wp-content/plugins/wp-e-commerce/wpsc-core/wpsc-constants.php on line 17

        Fatal error: session_start() [function.session-start]: Failed to initialize storage module: files (path: ) in /home/mikalu/public_html/wp-content/plugins/wp-e-commerce/wpsc-core/wpsc-constants.php on line 17

    I've tried to turn off safe mode in my PHP configuration: nothing happens, the error's still there. I thought it was some kind of permission issue, so I tried changing /tmp's permissions to 777. Nothing happens. I googled some more and suspect it might have something to do with the fastCGI configuration and such, which I don't fully understand. My search results mostly suggest consulting the web hosting provider or even moving to another host. But in this case I am the owner of the server (a VPS with cPanel/WHM), and I don't have any idea how to solve this kind of problem.
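    Since you control the VPS, a hedged starting point is to make sure safe mode is really off for the PHP that serves this site (with suPHP/FastCGI there is often a per-domain php.ini that overrides the global one) and to give PHP a session directory owned by the site's user. The path below is inferred from the error message, not known configuration:

        ; php.ini (global, or the per-domain copy used by suPHP/FastCGI)
        safe_mode = Off
        session.save_path = /home/mikalu/tmp

    Create the directory with matching ownership (mkdir /home/mikalu/tmp and chown mikalu:mikalu /home/mikalu/tmp), then restart Apache so the settings are picked up. phpinfo() is the quickest way to verify which php.ini was actually loaded and whether safe_mode still shows as On.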

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things. As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto should be more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto. Without a stub proto, these items were handled in a variety of ad hoc ways. If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this:

        - Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one.
        - These interdependencies are not obvious to the make utility, and can lead to races.
        - They are not obvious to the human reader, who may therefore not realize that they exist, and break them.

    Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems:

        - It requires a long list of exceptions to silence our normal unused-proto-item error checking.
        - In the past, we have accidentally shipped files that we did not intend to deliver to the end user.
        - Mixing cruft with valuable items makes it hard to discern which is which.

    The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and the elimination of all existing special-case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight - it will be a long-term, case-by-case project, but the overall trend is clear.

    The Stub Proto, -z assert_deflib, And The End Of Accidental System Object Linking

    We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.
    The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS. None of these consolidations is self contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist. The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server. In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas. In general, this works well and dependencies are found in the right places. However, there have always been failures:

        - Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server.
        - Errors in the makefiles can wipe out the -L options that our top level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server.

    These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error. The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so the second form of error was still possible.
While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure, caused by makefile errors:
- Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib).
- Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch.
- Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults.
- Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately.

All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated

    7021198 ld option to warn when link accesses a library via default path
    PSARC/2011/068 ld -z assert-deflib option

into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:

-z assert-deflib=[libname]
    Enables warning messages for libraries specified with the -l command line option that are found by examining the default search paths provided by the link-editor. If a libname value is provided, the default library warning feature is enabled, and the specified library is added to a list of libraries for which no warnings will be issued. Multiple -z assert-deflib options can be specified in order to specify multiple libraries for which warnings should not be issued. The libname value should be the name of the library file, as found by the link-editor, without any path components. For example, the following enables default library warnings, and excludes the standard C library:

    ld ... -z assert-deflib=libc.so ...

-z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use.

Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of exceptions. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exception to this is dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better.

Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with

    7021896 Prevent OSnet from accidentally linking to build system

which integrated into snv_162 (March 2011).
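To make the overall recipe concrete, here is a rough sketch of the resulting makefile fragment. The macro and library names are illustrative only; this is not taken from the actual ON makefiles:

    # Search the workspace proto areas before the build system's
    # default /lib and /usr/lib.
    LDFLAGS += -L$(PROTO)/lib -L$(PROTO)/usr/lib

    # Plain form: flag any library resolved from the default paths;
    # -z fatal-warnings turns the warning into a hard build error.
    LDFLAGS += -z assert-deflib -z fatal-warnings

    # Compiler-supplied runtimes live wherever the compiler happens to
    # be installed, so those few are exempted by name (hypothetical):
    LDFLAGS += -z assert-deflib=libCrun.so

Non-OSnet dependencies would then be satisfied by symlinks planted in the stub proto (for example, a link named libX11.so pointing at the system copy), so that any resolution from the link-editor defaults really does indicate a makefile error.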
Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, all of which were fixed in the same putback. The errors we found underscore how difficult such mistakes are to prevent without an automatic check in place to catch them.

Conclusions

The stub proto is proving to be a generally useful construct for ON builds, going well beyond its original role as a place to hold stub objects. It has already allowed us to simplify a number of previously difficult situations in our makefiles and builds, and I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • unable to upgrade to 12.10 beta 2 from 12.04 [closed]

    - by user85959
Possible Duplicate: There's an issue with an Alpha/Beta Release of Ubuntu, what should I do?

authenticate 'quantal.tar.gz' against 'quantal.tar.gz.gpg' exception from gpg: GnuPG exited non-zero, with code 2

Debug information:
gpg: Signature made Fri 28 Sep 2012 03:55:55 AM IST using DSA key ID 437D05B5
gpg: /tmp/update-manager-bpIptI/trustdb.gpg: trustdb created
gpg: Good signature from "Ubuntu Archive Automatic Signing Key "
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: 6302 39CC 130E 1A7F D81A 27B1 4097 6EAF 437D 05B5
gpg: Signature made Fri 28 Sep 2012 03:55:55 AM IST using RSA key ID C0B21F32
gpg: Can't check signature: public key not found

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/UpdateManager/UpdateManager.py", line 1110, in on_button_dist_upgrade_clicked
    fetcher.run()
  File "/usr/lib/python2.7/dist-packages/UpdateManager/Core/DistUpgradeFetcherCore.py", line 253, in run
    _("Authenticating the upgrade failed. There may be a problem "
  File "/usr/lib/python2.7/dist-packages/UpdateManager/DistUpgradeFetcher.py", line 41, in error
    return error(self.window_main, summary, message)
  File "/usr/lib/python2.7/dist-packages/UpdateManager/Core/utils.py", line 384, in error
    d.window.set_functions(Gdk.FUNC_MOVE)
RuntimeError: unable to get the value
gpg: /tmp/tmplqoLDu/trustdb.gpg: trustdb created

    Read the article

  • Fuzzing for Security

    - by Sylvain Duloutre
Yesterday, I attended an internal workshop about ethical hacking. Hacking skills like fuzzing can be used to quantitatively assess and measure security threats in software. Fuzzing is a software testing technique used to discover coding errors and security loopholes in software, operating systems or networks by injecting massive amounts of random data, called fuzz, into the system in an attempt to make it crash. If the program contains a flaw that leads to an exception, crash or server error (in the case of web apps), it can be determined that a vulnerability has been discovered.

A fuzzer is a program that generates and injects random (and in general faulty) input into an application. Its main purpose is to make things easier and automated.

There are typically two methods for producing the fuzz data that is sent to a target: generation or mutation. Generational fuzzers are capable of building the data being sent based on a data model provided by the fuzzer creator. Sometimes this is as simple and dumb as sending random bytes or swapping bytes; smarter fuzzers know good values and combine them in interesting ways. Mutation, on the other hand, starts out with a known good "template" which is then modified. However, nothing that is not present in the "template" or "seed" will be produced.

Generally, fuzzers are good at finding buffer overflow, DoS, SQL injection and format string bugs. They do a poor job at finding vulnerabilities related to information disclosure, encryption flaws and any other vulnerability that does not cause the program to crash. Fuzzing is simple and offers a high benefit-to-cost ratio, but it does not replace other proven testing techniques.

What is your computer doing over the week-end?
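To make the mutation approach concrete, here is a minimal sketch in Python (my illustration, not from the workshop; the seed file and target command are hypothetical). It repeatedly corrupts a few bytes of a known-good input and feeds the result to the program under test, watching for a crash:

    import random
    import subprocess

    def mutate(data, flips=8):
        # Mutation fuzzing: start from a known-good "seed" and flip a few
        # random bytes; only variations of the seed are ever produced.
        buf = bytearray(data)
        for _ in range(flips):
            buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    with open("good_input.png", "rb") as f:   # hypothetical seed file
        seed = f.read()

    for i in range(10000):
        with open("fuzzed.png", "wb") as f:
            f.write(mutate(seed))
        # "./target" stands in for whatever program is under test
        result = subprocess.run(["./target", "fuzzed.png"])
        if result.returncode < 0:             # killed by a signal, e.g. SIGSEGV
            print("crash on iteration", i, "- input saved as fuzzed.png")
            break

A generational fuzzer would replace mutate() with code that builds inputs from a grammar or data model instead of corrupting a seed.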

    Read the article

  • Write DAX queries in Report Builder #ssrs #dax #ssas #tabular

    - by Marco Russo (SQLBI)
If you use Report Builder with Reporting Services, you can use DAX queries even if the editor for the Analysis Services provider does not support DAX syntax. In fact, the DMX editor that you can use in the Visual Studio editor of Reporting Services (see a previous post on that) is not available in Report Builder. However, as Sagar Salvi commented in this Microsoft Connect entry, you can use DAX query text in the query of a Dataset by using the OLE DB provider instead of the Analysis Services one. I think it's a good idea to show the steps required. First, create a Data Source using the OLE DB connection type, and provide in the connection string the provider (Provider), the server name (Data Source) and the database name (Initial Catalog), such as: Provider=MSOLAP;Data Source=SERVERNAME\\TABULAR;Initial Catalog=AdventureWorks Tabular Model SQL 2012 Then, create a Dataset using the data source previously defined, select the Text query type, and write the DAX code in the Query pane. You can also use the Query Designer window, which doesn't provide any particular help in writing the DAX query, but at least can show a preview of the result of the query execution. I hope DAX will get better editors in the future… in the meantime, remember you can use DAX Studio to write and test your DAX queries, and DAX Formatter to improve their readability! If you want to learn the DAX query language, I suggest watching my video Data Analysis Expressions as a Query Language on Project Botticelli!
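For example, a Text query for such a dataset might look like the following sketch (my example, not from the original post; the table and column names assume the AdventureWorks tabular model referenced in the connection string above):

    EVALUATE
    SUMMARIZE (
        'Internet Sales',
        'Date'[Calendar Year],
        "Sales Amount", SUM ( 'Internet Sales'[Sales Amount] )
    )
    ORDER BY 'Date'[Calendar Year]

The OLE DB provider simply passes this text through to the tabular model, so anything you can run in DAX Studio should work as the dataset query.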

    Read the article

  • How to create single integer index value based on two integers where first is unlimited?

    - by Jan Doggen
I have table data containing an integer value X ranging from 1 to unknown, and an integer value Y ranging from 1..9. The data need to be presented in order 'X, then Y'. For one visual component I can set multiple index names: X;Y. But for another component I need a one-dimensional integer value as index (sort order). If X were limited to an upper bound of say 100, the one-dimensional value could simply be X*100 + Y. If the one-dimensional value could have been a real, it could be X + Y/10. But if I want to keep X unlimited, is there a way to calculate a single integer 'indexing' value from X and Y?

[Added] Background information: I have a Gantt/TreeList component where the tasks are ordered on a TaskIndex integer. This does not need to be a real database field; I can make it a calculated field in the underlying client dataset. My table data is e.g. as follows:

    ID  Baseline  ParentID
    1   0         0   (task)
    5   2         1   (baseline)
    8   1         1   (baseline)
    9   0         0   (task)
    12  0         0   (task)
    16  1         12  (baseline)

Task 1 has two baselines numbered 1 and 2 (IDs 8 and 5). Task 9 has no baselines. Task 12 has one baseline numbered 1 (ID 16). Baselines are numbered 1-9 (the Y variable from my question); 0 or null identify the tasks. IDs are unlimited (the X variable). The user plays with the visibility of baselines, e.g. he wants to see all tasks with all baselines labeled 1. This is done by updating a filter on the table. Right now I constantly have to recalculate TaskIndex after changing the filter (looping through records). It would be nice if TaskIndex could be calculated on the fly for each record, knowing only the data in the current record (I work in Delphi, where a client dataset has an OnCalcFields event handler that is triggered for each record when necessary). I have no control over the inner workings of the visual component.
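Since Y never needs more than one decimal digit, one answer (my sketch, not from the question) is simply X*10 + Y, computed in a wide integer type so that "unlimited" X only has to fit in the remaining bits; in Delphi that would be an Int64 calculated field, where X is the task's ID (the ParentID for baseline rows) and Y is the baseline number. A quick illustration in Python:

    def task_index(x, y):
        # Pack unbounded X and single-digit Y into one sortable integer:
        # the last decimal digit holds Y, the remaining digits hold X.
        assert 0 <= y <= 9
        return x * 10 + y

    # (X, Y) pairs mirroring the sample rows above, in arbitrary order
    rows = [(12, 0), (1, 0), (8, 1), (5, 2), (16, 1), (9, 0)]
    print(sorted(rows, key=lambda r: task_index(*r)))
    # -> [(1, 0), (5, 2), (8, 1), (9, 0), (12, 0), (16, 1)]

This satisfies the "current record only" constraint, since the key depends on nothing but the record's own X and Y values.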

    Read the article

  • CodePlex Daily Summary for Tuesday, December 28, 2010

CodePlex Daily Summary for Tuesday, December 28, 2010

Popular Releases:
- MonitorWang: MonitorWang v1.0.5 (Growler): What's new? Added Growl Notification Finalisers - these are interceptor components that work exclusively with the Growl Publisher. These allow you to modify the Growl Notification just prior to it being sent by the publisher. You can inject custom logic to precisely control how the Growl Notification will appear; this includes changing the Growl Priority level and message text. I've created two Growl Notification Finalisers - one allows you to change the Growl Notification Priority based on ...
- Catel - WPF and Silverlight MVVM library: 1.0.0: And there it is, the final release of Catel, and it is no longer a beta version!
- Multicore Task Framework: MTF 1.0.2: Release 1.0.2 of Multicore Task Framework.
- EnhSim: EnhSim 2.2.7 ALPHA: This release supports WoW patch 4.03a at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Mongoose has bee...
- LINQ to Twitter: LINQ to Twitter Beta v2.0.19: Mono 2.8, Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation. Bug fixes.
- Hammock for REST: Hammock v1.1.4: v1.1.4 changes: OAuth fixes for post content handling, encoding, and url parameter doubling. v1.1.3 changes: added an event handler for use with retries; made improvements to OAuth for performance, plus additional fixes; added OAuth token refresh support; fixed memory leak in content streaming, and regression issue with async GET call response handling. v1.1.2 changes: added OAuth Echo native support and static helper methods on OAuthCredentials; fixes for multi-part stream writing and recovery...
- Rocket Framework (.Net 4.0): Rocket Framework for Windows V 1.0.0: Architecture is reviewed and adjusted in a way so that I can introduce the Web version and WPF version of this framework next. Rocket.Core is introduced; controller button functions revisited and updated; DB is renewed to suit the implemented features; Create New button functionality is changed; Add Question handling features.
- FxCop Integrator for Visual Studio 2010: FxCop Integrator 1.1.0: New feature: search violation information - FxCop Integrator provides the violation search feature; you can find specific violation information by simple search expression. New feature: analyze with FxCop project file - FxCop Integrator supports code analysis with an FxCop project file; you can customize code analysis behavior (e.g. analyze specified types only, use specific rules only, and so on). Improvement: improved the code analysis result list to show more information (added Project and File column). Change...
- NoSimplerAccounting: NoSimplerAccounting 6.0: Fixed a bug in expense category report.
- NHibernate Mapping Generator: NHibernate Mapping Generator 2.0: Added support for Postgres (thanks to Angelo).
- NewLife XCode: XCode v6.5.2010.1223: updates to NewLife.Core, NewLife.Net, XControl, XTemplate, XAgent, NewLife.CommonEnitty, XCoder and XCoder_Src [the remainder of the Chinese release notes is garbled in this copy]
- MiniTwitter: 1.64: [Japanese release notes garbled in this copy]
- VivoSocial: VivoSocial 7.4.0: Please see changes: http://support.vivoware.com/project/ChangeLog.aspx?PROJID=48
- Umbraco CMS: Umbraco 4.6 Beta - codename JUNO: The Umbraco 4.6 beta (codename JUNO) release contains many new features focusing on an improved installation experience, a number of robust developer features, and contains more than 89 bug fixes since the 4.5.2 release. Improved installer experience; updated Starter Kits (Simple, Blog, Personal, Business); beautiful, free, customizable skins included; skinning engine and skin customization (see Skinning Documentation Kit); default dashboards on install with hide option; updated Login t...
- SSH.NET Library: 2010.12.23: This release includes some bug fixes and a few new features. Fixes: allow retrieval of big directory structures; ssh-dss algorithm is fixed; populate sftp file attributes. New features: support for passphrase when a private key is used; support added for diffie-hellman-group14-sha1, diffie-hellman-group-exchange-sha256 and diffie-hellman-group-exchange-sha1 key exchange algorithms; allow multiple key files for authentication; added support for the "keyboard-interactive" authentication method...
- ASP.NET MVC SiteMap provider: MvcSiteMapProvider 2.3.0: Using NuGet? MvcSiteMapProvider is also listed in the NuGet feed. Learn more... Like the project? Consider a donation! Donate via PayPal. Release notes: This will be the last release targeting ASP.NET MVC 2 and .NET 3.5; MvcSiteMapProvider 3.0.0 will be targeting ASP.NET MVC 3 and .NET 4. Web.config setting skipAssemblyScanOn has been deprecated in favor of excludeAssembliesForScan and includeAssembliesForScan. ISiteMapNodeUrlResolver is now completely responsible for generating th...
- Media Companion: Media Companion 3.400: Extract the entire archive to a folder which has user access rights, e.g. desktop, documents etc. A manual is included to get you started.
- Windows Media Player GNTP Plugin: WMP-GNTP v1.0.5: This is the installer for WMP-GNTP. Install it to get the plugin on your system.
- Google Geo Kit: Static Google Map WinForm Control Nightly Build: 12/22/2010 MD5sum - b8118c9970d6dc9480fe7c41f042537f. Added event OnGMapNotDefined: when anything goes wrong internally in StaticGmap and a null stream is returned, this event will be fired.
- Silverlight Sockets Sample: No binaries: Whole source code in a ZIP. Shame 'Source Code' tab isn't working, so I'll just upload a ZIP.

New Projects:
- AsfMojo: AsfMojo is an open source .NET ASF parsing library, providing support for parsing WMA audio and WMV video files. It offers classes to create streams from packet data within a media file, gather file statistics and extract audio segments or frame accurate still frames.
- asp.net ajax file manager: Features: full Ajax support, security, view image, create folder, delete file & folder, copy file & folder, move file & folder, multiple selection (Ctrl + Select), easy installation and configuration, open source. Compatible with IE7, IE8, FF, Safari, Chrome. http://www.filemanager.3ntar.net
- BoxNet: BoxNet is an open source library which will provide possibility for Windows Phone 7 developers to integrate with DropBox.
- C# Sqlite For WP7: C# Sqlite port for Windows Phone 7 and possibly Silverlight 3, 4. The core engine was slightly modified to be used with IsolatedStorage, and SqliteClient was ported by using missing code from the Mono project in order to maximize usability and portability from desktop.
- Comparison of Managed Compression Algorithms: Visual Studio 2010 console test harness with samples of MiniLZO, QuickLz, SharpZipLib, iRolz and other managed compression classes. Tests compression of a 2MB Word file and shows timings. Useful in deciding what type of compression to use. Developers can easily add their own.
- Cypher Bot 2011: Cypher Bot 2011 brings the word cipher to a whole new level. You can now easily open, save, print, and send ciphers. Make a short message or completely encrypt a document. The cipher is impossible to figure out unless you have the keyword and algorithm to solve it! Try it out!
- Directed Graph for .NET: This project presents a simple directed graph implementation for the .NET framework using the C# language.
- DynamicAccess: DynamicAccess is a library to aid connecting DLR languages such as ironpython and ironruby to non-dynamic languages like managed C++. It also fills in some gaps in the current C# support of dynamic objects, such as member access by string and deletion of members or indexes.
- Euro for Windows XP: A simple tool and sample to change the Estonian currency from Estonian Krone (kr) to Euro (€). Applies to all versions of Windows and from .NET 2.0, which is the default build. The sample creates a custom locale and updates existing users through the Registry.
- Excel PowerShell Console: An Excel AddIn to enable using PowerShell for Excel automation.
- fantastic: fantastic
- Gomarket Toolbar: GoMarket FireFox toolbar
- js: js
- Living Agile's Common Framework: This project is a collection of commonly used routines here at Living Agile. There are a lot of helpful extension methods, logging, configuration and threading utilities. All other Living Agile projects have a dependency on this project.
- MapWindow 4: MapWindow4 is a free Windows GIS application and uses an ActiveX control at its core that can be embedded into many applications that don't support .Net, such as Excel, Access, Visual Basic 6, or other pre-.Net languages.
- Mdelete API: Delete all files and directories in the Windows shell. Supports long paths (less than 32000 chars) and network paths (e.g. \\server\share or \\127.0.0.1\share).
- OP-Code SyntaxEditor: OP-Code SyntaxEditor is a Windows Forms control, similar to a multi-line TextBox, which syntax highlights text and provides some features for code editing.
- passion: passion
- SharpHighlighter: SharpHighlighter is an extension for Visual Studio; a fairly simple code highlighter for C#. I made it for anyone who wants to download and learn how to create his own parser or syntax highlighter. This sample will highlight only C# classes and structs, but it's quite extensible.
- Test Center Locator: Test Center Locator makes it easier to find the closest TOEFL or IELTS test center.

    Read the article

  • Implementing an automatic navigation mesh generation for 2d top down map?

    - by J2V
I am currently in the middle of implementing A* pathfinding for enemies. In order to implement the actual A* logic, I need a navigation mesh for my map. I am working on a 2D top-down RPG map. The world is static, meaning there is no requirement for dynamic runtime mesh generation. My world objects are pixel based, not tile based, and have associated data such as scale, rotation, origin etc. I will obviously need some vertex data generated from my world objects; maybe generate polygons from color data? I could create a color map with objects for my whole map, but I have no idea how to begin creating nav mesh polygons from it. What would an actual navigation mesh generation look like with this kind of information available? Can anyone point me to some good resources? I have looked into some 3D nav mesh tools, but they seem overly complex for my situation and also derive a lot of their required data from models. Thanks a lot in advance! I have been trying to get my head around it for some time now.
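Not an authoritative recipe, but one possible starting point: rasterize the static world into a coarse walkable grid (sampling the collision color map), then merge walkable cells into axis-aligned rectangles; the rectangles become the nav mesh polygons, and rectangles that share an edge become the links A* searches over. A minimal sketch in Python, where is_blocked(x, y) is a hypothetical stand-in for a lookup into the map's collision/color data:

    def build_walkable_grid(is_blocked, width, height, cell=16):
        # One bool per cell: True where the world is walkable at that sample.
        return [[not is_blocked(cx * cell, cy * cell)
                 for cx in range(width // cell)]
                for cy in range(height // cell)]

    def merge_rectangles(grid):
        # Greedily cover walkable cells with rectangles (x, y, w, h) in
        # cell units; each rectangle is one nav-mesh polygon.
        rows, cols = len(grid), len(grid[0])
        used = [[False] * cols for _ in range(rows)]
        rects = []
        for y in range(rows):
            for x in range(cols):
                if not grid[y][x] or used[y][x]:
                    continue
                # Grow right as far as possible...
                w = 1
                while x + w < cols and grid[y][x + w] and not used[y][x + w]:
                    w += 1
                # ...then grow down while the whole row span stays walkable.
                h = 1
                while y + h < rows and all(
                        grid[y + h][x + i] and not used[y + h][x + i]
                        for i in range(w)):
                    h += 1
                for dy in range(h):
                    for dx in range(w):
                        used[y + dy][x + dx] = True
                rects.append((x, y, w, h))
        return rects

A* then runs over a graph whose nodes are the rectangles and whose edges connect rectangles sharing a border; a funnel/string-pulling pass over those shared edges can smooth the final path.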

    Read the article
