Search Results

Search found 3510 results on 141 pages for 'belongs on meta'.


  • SQL SERVER – Clustered Index and Primary Key – Contest Win Joes 2 Pros Combo (USD 198) – Day 3 of 5

    - by pinaldave
    In August 2011 we ran a contest where we gave away one book every day for an entire month. The contest was extremely successful: lots of people participated and lots of books were given away. I have received lots of questions asking if we are doing something similar this month. Absolutely. Instead of running a month-long contest, we are doing something more interesting: we are giving away a gift worth USD 198 every day this week. We are giving away the Joes 2 Pros 5-volume (book) SQL 2008 Development Certification Training Kit every day, one copy in India and one in the USA, for a total of 2 giveaways (worth USD 198). All the gifts are sponsored by Koenig Training Solution and Joes 2 Pros. The books are available here: Amazon | Flipkart | Indiaplaza
    How to Win:
    Read the question
    Read the hints
    Answer the quiz in the contact form in the following format: Question Answer, Name of the country (the contest is open to USA and India residents only)
    2 winners will be randomly selected and announced on August 20th.
    Question of the Day: Which of the following datatypes is usually NOT the best choice for a Primary Key and Clustered Index?
    a) INT
    b) BIGINT
    c) GUID
    d) SMALLINT
    Query Hints: BIG HINT POST
    The clustered index is the placement order of a table’s records in memory pages. When you insert new records, each record is inserted into the memory page in the order it belongs. In the figure below we see another new record (Major Disarray) being inserted, in sequence, between Jonny and Rick. Since there is no room in this memory page, some records will need to shift around. The page split occurs when Irene’s record moves to the second page. Page splits are considered very bad for performance, and there are a number of techniques to reduce, or even eliminate, the risk of page splits. You can create a clustered index on the table on any field you choose. Sometimes SQL will create a clustered index for you. Often the field holding the Primary Key makes a great candidate for the clustered index.
    Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 3:
    SQL Joes 2 Pros Development Series – All about SQL Statistics
    SQL Joes 2 Pros Development Series – Introduction to Page Split
    SQL Joes 2 Pros Development Series – The Clustered Index – Simple Understanding
    SQL Joes 2 Pros Development Series – Geography Data Type – Calculating Distance Between Two Points on the Earth
    SQL Joes 2 Pros Development Series – Sparse Data and Space Used by Sparse Data
    SQL Joes 2 Pros Development Series – System and Time Data Types
    SQL Joes 2 Pros Development Series – Data Row Space Usage and NULL Storage
    Next Step: Answer the quiz in the contact form in the following format: Question Answer, Name of the country (the contest is open for USA and India)
    Bonus Winner: Leave a comment with your favorite article from the “additional hints” section and you may be eligible for a surprise gift. There is no country restriction for this bonus contest. Do mention why you liked that particular blog post, and I will announce the winner of the same along with the main contest.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
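    To see why a random key is worth thinking about here, consider this minimal T-SQL sketch (table and column names are made up for the illustration): an ever-increasing INT IDENTITY key always appends at the end of the last page, while NEWID() values land at random positions inside existing pages and force page splits, which shows up as fragmentation:

        -- hypothetical tables for the illustration (not from the post)
        CREATE TABLE dbo.OrdersInt (
            OrderID   INT IDENTITY(1,1) NOT NULL,
            OrderDate DATETIME NOT NULL,
            CONSTRAINT PK_OrdersInt PRIMARY KEY CLUSTERED (OrderID)  -- new rows append to the last page
        );

        CREATE TABLE dbo.OrdersGuid (
            OrderID   UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
            OrderDate DATETIME NOT NULL,
            CONSTRAINT PK_OrdersGuid PRIMARY KEY CLUSTERED (OrderID) -- random values insert mid-page and split pages
        );

        -- after loading both tables with the same rows, compare the split damage:
        SELECT OBJECT_NAME(object_id) AS table_name, avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED')
        WHERE OBJECT_NAME(object_id) LIKE 'Orders%';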

    Read the article

  • Increasing touch surface (#wp7dev)

    - by Laurent Bugnion
    When you design for Windows Phone 7 (or for any touch device, for that matter, and most especially small screens), you need to be very careful to give enough surface to your users’ fingers. It is easy to miss a touch on such small screens, and that can be horrifyingly frustrating. This is especially true when people are on the move, trying to hit the control while walking and holding their device in one hand, or when the device is mounted in a car and vibrating with the engine. In my experience, a touch surface should ideally be a minimum of 60x60 pixels to be easy to activate on the Windows Phone 7 screen (which is, as we know, 800 x 480 pixels). Ideally, I try to make my touch surfaces 80x80 pixels minimum. This causes a few design challenges, of course.
    Using transparent backgrounds
    However, one thing helps us tremendously: some surfaces can be made transparent and yet react to touch. The secret is the following: if a panel has a null background (i.e. the Background is not set at all), then the empty surface does not react to touch. If, however, the Background is set to the Transparent color (or any color where the alpha channel is set to 0), then it will react to touch. Setting a transparent background is easy. For example: <Grid Background="#00000000"> </Grid> or <Grid Background="Transparent"> </Grid> In C#: var grid = new Grid { Background = new SolidColorBrush( Colors.Transparent) };
    Using negative margins
    Having a transparent background reactive to touch is a good start, but in addition, you must make sure that the surface is big enough for clumsy fingers. One way to achieve that is to increase the transparent, touch-reactive surface, and reposition the element using negative margins. For example, consider the following UI. I changed the transparent background of the HyperlinkButton to red, in order to visualize the touch surface. In this figure, the Settings HyperlinkButton is 105 pixels x 31 pixels. This is wide enough, but really small in height and easy to miss. To improve this, we can use negative margins, for instance: <HyperlinkButton Content="Settings" HorizontalAlignment="Right" VerticalAlignment="Bottom" Height="60" Margin="0,0,0,-15" /> Notice the use of a negative bottom margin to bring the HyperlinkButton back to the bottom of the main Grid’s first row, where it belongs. And the result is: notice how the touch surface is much bigger than before. This makes the HyperlinkButton easier to reach, and improves the user experience. With the background set back to normal, the UI looks exactly the same, as it should.
    In summary:
    Remember to maximize the touch surface for your controls. Plan your design accordingly by reserving enough room around each control to allow their hit surface to be expanded as shown in this article.
    Do not cram too many controls in one page. If REALLY needed, use an additional page (or even better: use a Pivot control with multiple pivot items) for the controls that don’t fit on the first one.
    This should ensure a smoother user experience and improved touch behavior. Happy coding! Laurent   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Tomcat deployment overwrites context.xml

    - by Kristoffer
    Hi, I'm pretty new to Tomcat in general, so please point out if I've got anything wrong. My question is about updates to already-deployed apps, using the Tomcat manager. But first things first. I'm using META-INF/Context.xml to store the connection info for the database connections, so this file is unique to every server the application is deployed to. I'm not sure if this is optimal, but it's the only way I know. So, when updating the application, it's important that this file doesn't get modified, because I don't want to have to go in and remake all the changes every time I update my app. For updating, I'm using the Tomcat Manager, and I've tried different approaches, but everything seems to build on the process of undeploying, then deploying the new version. This way, the Context.xml gets removed/replaced by an empty Context.xml file. So my question is basically: how do I update a running webapp while leaving the Context.xml untouched? Btw, I'm running Tomcat 6.0.24.
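    One way around this (a sketch, not from the question; the resource name and values are placeholders) is to keep the server-specific connection info outside the WAR altogether, for example as a resource in $CATALINA_BASE/conf/context.xml, which applies to every webapp on that server and is never touched by deploy/undeploy cycles:

        <!-- $CATALINA_BASE/conf/context.xml (resource name and values are placeholders) -->
        <Context>
            <Resource name="jdbc/appDB" auth="Container"
                      type="javax.sql.DataSource"
                      driverClassName="com.mysql.jdbc.Driver"
                      url="jdbc:mysql://localhost:3306/appdb"
                      username="appuser" password="secret"
                      maxActive="20" maxIdle="5"/>
        </Context>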

    Read the article

  • Encryption setup for Linux NAS?

    - by Daniel
    There are a bazillion hard disk encryption HOWTOs, but somehow I can't find one that actually does what I want. Which is: I have a home NAS running Ubuntu, which is being accessed by a Linux and a Win XP client. (Hopefully Mac OS X soon...) I want to set up encryption for the home dirs on the NAS so that:
    it does not interfere with the boot process (since the NAS is tucked away in a cupboard),
    the home dirs are accessible as a regular file system on the client(s) (e.g. via SMB),
    it is easy to use by 'normal' people (so it does not require SSH-ing to the NAS, mounting the encrypted partition on the command line, then connecting via SMB, and finally unmounting the partition after being done; I can't explain that to my mom, or in fact to anyone),
    it does not store the encryption key on the NAS itself,
    it encrypts file meta-data and content (i.e. it is safe against the 'RIAA' attack, where an intruder should not be able to identify which songs are in your MP3 collection).
    What I hoped to do was use Samba + PAM. The idea was that on connecting to the SMB server, I'd have to enter the password on the client, which sends it to the server for authentication, which would use the password to mount the encrypted partition, and would unmount it again when the session was closed. Turns out that doesn't really work, because SMB does not transmit the password in the clear, and hence I can't configure PAM to use the incoming password to mount the encrypted partition. So... anything I'm overlooking? Is there any way in which I can use the password entered on the client (e.g. on SMB connect) to initiate mounting the encrypted dir on the server?
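    For reference, the manual workflow to be automated away (a sketch; device, mapper and share names are placeholders, and LUKS/cryptsetup stands in for whichever encryption layer ends up being chosen) looks roughly like this; the question is how to trigger the two middle steps from the client's SMB login:

        # on the client: open an admin session on the NAS
        ssh admin@nas
        # on the NAS: unlock and mount the encrypted home partition
        sudo cryptsetup luksOpen /dev/sdb1 homes
        sudo mount /dev/mapper/homes /home
        # back on the client: use the share, then reverse the steps when done
        smbclient //nas/home -U user
        sudo umount /home
        sudo cryptsetup luksClose homes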

    Read the article

  • Tuning Default WorkManager - Advantages and Disadvantages

    - by Murali Veligeti
    Before discussing tuning the Default WorkManager, let's have a brief introduction to what the Default WorkManager is. Before the WebLogic Server 9.0 release, we had the concept of Execute Queues. In WebLogic Server (before WLS 9.0), processing was performed in multiple execute queues. Different classes of work were executed in different queues, based on priority and ordering requirements, and to avoid deadlocks. In addition to the default execute queue, weblogic.kernel.default, there were pre-configured queues dedicated to internal administrative traffic, such as weblogic.admin.HTTP and weblogic.admin.RMI. Users could control thread usage by altering the number of threads in the default queue, or configure custom execute queues to ensure that particular applications had access to a fixed number of execute threads, regardless of overall system load. From the WLS 9.0 release onwards, WebLogic Server uses a single thread pool (which is called the Default WorkManager), in which all types of work are executed. WebLogic Server prioritizes work based on rules you define and on run-time metrics, including the actual time it takes to execute a request and the rate at which requests are entering and leaving the pool. The common thread pool changes its size automatically to maximize throughput. The queue monitors throughput over time and, based on history, determines whether to adjust the thread count. For example, if historical throughput statistics indicate that a higher thread count increased throughput, WebLogic increases the thread count. Similarly, if statistics indicate that fewer threads did not reduce throughput, WebLogic decreases the thread count. This new strategy makes it easier for administrators to allocate processing resources and manage performance, avoiding the effort and complexity involved in configuring, monitoring, and tuning custom execute queues. The Default WorkManager is used to handle thread management and perform self-tuning. This Work Manager is used by an application when no other Work Managers are specified in the application’s deployment descriptors. In many situations, the default Work Manager may be sufficient for most application requirements. WebLogic Server’s thread-handling algorithms assign each application its own fair share by default. Applications are given equal priority for threads and are prevented from monopolizing them. The default work-manager, as its name suggests, is the work-manager used by default. Thus, all applications deployed on WLS will use it. But sometimes, when your application is already in production, it's obvious you can't take your EAR / WAR, update the deployment descriptor(s) and redeploy it. The default work-manager belongs to a thread pool, and the initial thread pool comes with only five threads; that's not much. If your application has to face a large number of hits, you may want to start with more than that. Well, that's quite easy. You have two options to do so.
    1) Modify the config.xml
    Just add the following line(s) in your server definition: <server> <name>AdminServer</name> <self-tuning-thread-pool-size-min>100</self-tuning-thread-pool-size-min> <self-tuning-thread-pool-size-max>200</self-tuning-thread-pool-size-max> [...] </server>
    2) Add some JVM parameters
    Add the following system properties in setDomainEnv.sh/setDomainEnv.cmd or startWebLogic.sh/startWebLogic.cmd: -Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=100 Reboot WLS and see that the options have been taken into account.
    Disadvantage: So far, so good. But there is a disadvantage to tuning the Default WorkManager. Internally, WebLogic Server has many work managers configured for different types of work. If we run out of threads in the self-tuning pool (because of the system property -Dweblogic.threadpool.MaxPoolSize) due to it being undersized, then important work that WLS might need to do could be starved. So limiting the self-tuning pool does not just limit the default WorkManager; it also limits all the other internal WorkManagers that WLS uses. The best alternative is therefore to override the default WorkManager, that is, create a WorkManager for the application and assign it to the application instead of tuning the Default WorkManager.
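    As a sketch of that alternative (the Work Manager name and thread count are assumptions for the example), a per-application Work Manager can be declared in the web module's WEB-INF/weblogic.xml and referenced as the module's dispatch policy:

        <!-- WEB-INF/weblogic.xml: a dedicated Work Manager for this application only -->
        <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
            <work-manager>
                <name>AppWorkManager</name>
                <max-threads-constraint>
                    <name>AppMaxThreads</name>
                    <count>100</count>
                </max-threads-constraint>
            </work-manager>
            <wl-dispatch-policy>AppWorkManager</wl-dispatch-policy>
        </weblogic-web-app>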

    Read the article

  • subdomain .htaccess redirection via ssh remote port forwarding

    - by Achim
    I'm asking for your help in redirecting a subdomain URL to an SSH remote-forwarded port. The current setup is the following: server A has a local webserver running on port 80. This server is connected to a DSL line or a GPRS connection where the IP address changes often. To avoid a DynDNS setup we established SSH remote port forwarding to a server B with a static IP address. This is done on server A by the following statement: ssh -N -p 80 -g -R 10000:localhost:80 tunneling@<Server B IP> So by accessing port 10000 on server B's IP address, all traffic is forwarded to port 80 on server A - this works fine! But to offer a more comfortable URL to the user, I want to hide server B's IP address and offer a subdomain. My domain provider allows adding subdomains and redirections to other servers. In general this works; I've tested it with different servers. But it doesn't work if the destination is the forwarded port on server B. The initial redirection is done, the request is sent to server A and the response is forwarded through server B and shown in the browser - fine. But then the URL within the browser switches away from the subdomain to the IP:port of server B, so the user doesn't see the subdomain in the URL string of the browser anymore. I've tried this with my provider's subdomain redirection, as well as a .htaccess redirect, as well as a META refresh; the problem always persists. Is there a parameter in the ssh reverse forwarding setup (I guess this is the place where the fix has to be) to keep the typed-in subdomain URL and not show the IP? Thanks, Achim
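    For what it's worth, what usually solves this is not a redirect at all but a reverse proxy on server B, so the browser's address bar never leaves the subdomain; a minimal sketch assuming Apache with mod_proxy on server B (the subdomain is a placeholder):

        <VirtualHost *:80>
            ServerName sub.example.com
            # relay requests to the SSH-forwarded port instead of redirecting the browser there
            ProxyPass        / http://localhost:10000/
            ProxyPassReverse / http://localhost:10000/
        </VirtualHost>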

    Read the article

  • Why is Safari on my computer rendering all of the colors -- not just for images -- incorrectly?

    - by richardhenry
    I’m not just talking about image color profile issues; every single color that the browser renders is incorrect. It’s like it’s in its own color space (or something!). Screenshot: http://drp.ly/DJk1O (Opera, Safari, Chrome, Firefox) Spot the odd one out? Open this up in Photoshop or similar and try using the eyedropper to select the colour. Safari renders the same hex color completely differently. That color is set using a background-color declaration in CSS, so it should be identical in all four of those browsers. Here’s the HTML I was using:
    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
    <html lang="en">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <title>untitled</title>
    <style type="text/css">
    body { background-color: #114742; }
    </style>
    </head>
    <body>
    </body>
    </html>
    Literally every website I’m viewing with my install of Safari is displaying colors incorrectly. The blue of the bar on Facebook is slightly less rich. This doesn’t occur on any other Macs I’ve tried. Any idea what’s happened to my Safari install?

    Read the article

  • PHP include() through HTTP makes Apache time out

    - by Adam Interact
    I have a problem with ExpressionEngine2 after moving from an old server to WHM/cPanel running on CentOS 6.4. Simple test code to reproduce the issue:
    <?php
    $protocol = strpos(strtolower($_SERVER['SERVER_PROTOCOL']),'https') === FALSE ? 'http' : 'https';
    $host = $_SERVER['HTTP_HOST'];
    include($protocol . '://' . $host . '/header.html');
    ?>
    <p> Main text...</p>
    <?php include($protocol . '://' . $host . '/footer.html'); ?>
    Where header.html looks like:
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title>Untitled Document</title>
    </head>
    <body>
    and footer.html looks like:
    </body>
    </html>
    This makes Apache time out:
    Warning: include(http://www.domain.com/header.html) [function.include]: failed to open stream: Connection timed out in /home/domain/public_html/test/index.php on line 5
    Warning: include() [function.include]: Failed opening 'http://www.domain.com/header.html' for inclusion (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/domain/public_html/test/index.php on line 5
    Main text...
    Warning: include(http://www.domain.com/footer.html) [function.include]: failed to open stream: Connection timed out in /home/domain/public_html/test/index.php on line 12
    Warning: include() [function.include]: Failed opening 'http://www.domain.com/footer.html' for inclusion (include_path='.:/usr/lib/php:/usr/local/lib/php') in /home/domain/public_html/test/index.php on line 12
    Any clue what could be wrong with the Apache or PHP configuration? Thanks
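    As an aside, the usual workaround (a sketch, independent of whatever is blocking the HTTP loopback here) is to include the fragments through the filesystem instead of over HTTP, which avoids the self-request entirely and behaves the same for static header/footer files:

        <?php
        // include via the filesystem rather than a loopback HTTP request
        include($_SERVER['DOCUMENT_ROOT'] . '/header.html');
        ?>
        <p> Main text...</p>
        <?php include($_SERVER['DOCUMENT_ROOT'] . '/footer.html'); ?>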

    Read the article

  • iTerm2 vim cannot map alt key

    - by Eddy
    I'm having trouble trying to map the alt-key bindings on vim in iTerm2. I want to map shortcuts for switching between buffers like this:
    map <A-Right> <C-w>l
    map <A-Left> <C-w>h
    map <A-Down> <C-w>j
    map <A-Up> <C-w>k
    But I can't get it to work. I've tried everything, setting the option key as "Normal", "Meta" and "+Esc" in the profile settings. I've tried <M-Right> and <T-Right> but those don't work either. There are posts on superuser and stackoverflow but they use the old version of iTerm2 (v0.x). The only things I've managed to get working are <T-up> and <T-down>, or when I just use Macvim. I'm using iTerm2 v1.0.0.20120203, and Mac OS X 10.7.5 on a Macbook Pro.
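    One workaround that often helps (a sketch, assuming the iTerm2 profile's option key is set to +Esc, so Alt sends an ESC prefix rather than setting a meta bit) is to map the ESC-prefixed sequences directly instead of <A-...>; plain letters are more reliable than arrows here, because arrow keys send multi-byte sequences that vary between terminals:

        " with option = +Esc, Alt+h arrives in vim as <Esc>h, and so on
        map <Esc>h <C-w>h
        map <Esc>j <C-w>j
        map <Esc>k <C-w>k
        map <Esc>l <C-w>l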

    Read the article

  • ArchBeat Link-o-Rama Top 10 for October 21-27, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for the week of October 21-27, 2012.
    OTN Architect Day: Los Angeles
    This is your brain on IT architecture. Stuff your cranium with architecture by attending Oracle Technology Network Architect Day in Los Angeles, October 25, 2012, at the Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Technical sessions, panel Q&A, and peer roundtables—plus a free lunch. [NOTE: The event was last week, of course. Big thanks to the session presenters and especially to those Angelinos who came out for the event.]
    WebLogic Server 11gR1 Interactive Quick Reference
    "The WebLogic Server 11gR1 Administration interactive quick reference," explains Juergen Kress, "is a multimedia tool for various terms and concepts used in WebLogic Server architecture. This tool is available for administrators for online or offline use. This is built as a multimedia web page which provides descriptions of WebLogic Server Architectural components, and references to relevant documentation. This tool offers valuable reference information for any complex concept or product in an intuitive and useful manner."
    Podcast: Are You Future Proof?
    The latest OTN ArchBeat Podcast series features Oracle ACE Directors Ron Batra, Basheer Khan, and Ronald van Luttikhuizen, three practicing architects in an open discussion about how changes in enterprise IT are raising the bar for success for software architects and developers.
    Play Oracle Vanquisher
    Here's a little respite from whatever it is you normally spend your time on. Oracle Vanquisher is an online diversion that makes a game of data center optimization. According to the description: "Armed with a cool Oracle vacuum pack suit and a strategic IT roadmap, you will thwart threats and optimize your data center to increase your company’s stock price and boost your company’s position." Mainly you avoid electric shock and killer birds. The current high score belongs to someone identified as 'TEN'. My score? Never mind.
    Advanced Oracle SOA Suite OOW 2012 Presentations
    The Oracle SOA Product Management team has compiled a complete list of all twelve of their Oracle SOA Suite presentations from Oracle OpenWorld 2012, with links to the slide decks.
    OAM and OIM 11g Academies
    Looking for technical how-to content covering Oracle Access Manager and Oracle Identity Manager? The people behind the Oracle Middleware Security blog have indexed relevant blog posts into what they call "Academies." "These indexes contain the articles we’ve written that we believe provide long lasting guidance on OAM and OIM. Posts covered in these series include articles on key aspects of OAM and OIM 11g, best practice architectural guidance, integrations, and customizations."
    Oracle’s Analytics, Engineered Systems, and Big Data Strategy | Mark Rittman
    Part 1 of 3 in Oracle ACE Director Mark Rittman's series on Oracle Exalytics, Oracle R Enterprise and Endeca.
    Oracle ACE Directors Nordic Tour 2012: Venues and BI Presentations | Mark Rittman
    Oracle ACE Director Mark Rittman shares information on the Oracle ACE Director Tour, as the community leaders make their way through the land of the midnight sun, with events in Copenhagen, Stockholm, Oslo and Helsinki.
    Following the Thread in OSB | Antony Reynolds
    Antony Reynolds recently led an Oracle Service Bus POC in which his team needed to get high throughput from an OSB pipeline. "Imagine our surprise when, on stressing the system, we saw it lock up, with large numbers of blocked threads." He shares the details of the problem and the solution in this extensive technical post.
    OW12: Oracle Business Process Management/Oracle ADF Integration Best Practices | Andrejus Baranovskis
    The Oracle OpenWorld presentations keep coming! Oracle ACE Director Andrejus Baranovskis shares the slides from "Oracle Business Process Management/Oracle ADF Integration Best Practices," co-presented with Danilo Schmiedel from Opitz Consulting.
    Thought for the Day
    "Not everything that can be counted counts, and not everything that counts can be counted." — Albert Einstein (March 14, 1879 – April 18, 1955) Source: Quotes For Software Engineers

    Read the article

  • Linux on HP Envy

    - by Oscar Godson
    OK, the Ubuntu forums aren't helping, and I thought maybe you guys here could help. First off, does anyone know the best flavor of Linux to use on an HP Envy? What has the best support out of the box? If not, does anyone know how the hell to get the following to work on Ubuntu 10.04:
    The touchpad, at all? Right now, right-clicking doesn't work at all, and left clicks don't work while you have another finger on the pad. It jumps all over. Also, multi-touch clicking doesn't work, but it's for sure a multi-touch touchpad: it works in W7 and can do things like a MBP in W7.
    The computer feels like it's on fire... I think I'm missing some driver. It seems odd that the random meta keys like calc, email, brightness, right click, etc. work, but not the touchpad?
    The video card seems fine, but I haven't tested Compiz fully yet...
    Thanks so much to anyone who helps. I want to get back to Linux after a couple of years on Mac. :)

    Read the article

  • Creating an ec2 image on amazon fails at mkfs.ext3

    - by Dave Orr
    I'm trying to create an image of my ec2 instance in Amazon's cloud. It's been a bit of an adventure so far. I did manage to install Amazon's ec2-api-tools, which was harder than it seemed like it should have been. Then I ran:
    ec2-bundle-vol -d /mnt -k pk-{key}.pem -c cert-{cert}.pem -u {uid} -s 1536
    Which returned:
    Copying / into the image file /mnt/image...
    Excluding: /sys/kernel/debug /sys/kernel/security /sys /proc /dev/pts /dev /dev /media /mnt /proc /sys /etc/udev/rules.d/70-persistent-net.rules /etc/udev/rules.d/z25_persistent-net.rules /mnt/image /mnt/img-mnt
    1+0 records in
    1+0 records out
    1048576 bytes (1.0 MB) copied, 0.00677357 s, 155 MB/s
    mkfs.ext3: option requires an argument -- 'L'
    Usage: mkfs.ext3 [-c|-l filename] [-b block-size] [-f fragment-size] [-i bytes-per-inode] [-I inode-size] [-J journal-options] [-G meta group size] [-N number-of-inodes] [-m reserved-blocks-percentage] [-o creator-os] [-g blocks-per-group] [-L volume-label] [-M last-mounted-directory] [-O feature[,...]] [-r fs-revision] [-E extended-option[,...]] [-T fs-type] [-U UUID] [-jnqvFKSV] device [blocks-count]
    ERROR: execution failed: "mkfs.ext3 -F /mnt/image -U 1c001580-9118-4a50-9a25-dcf02be6d25f -L "
    So mkfs.ext3 wants -L, which is a volume name. But ec2-bundle-vol doesn't seem to take in a volume name as an argument, and the docs (http://docs.amazonwebservices.com/AmazonEC2/gsg/2006-06-26/creating-an-image.html) don't seem to think one should be needed. Certainly their sample command:
    # ec2-bundle-vol -d /mnt -k ~root/pk-HKZYKTAIG2ECMXYIBH3HXV4ZBZQ55CLO.pem -u 495219933132 -s 1536
    doesn't specify anything. So... any help? What am I missing?

    Read the article

  • LVM2 volume group lost

    - by MrG
    I updated one of my servers, but - although I took care not to modify anything - the volume group on /dev/sdb1 was lost, although the physical volume seems to still be there:
    [root@server ~]# pvscan
      PV /dev/sda2   VG VolGroup   lvm2 [465,16 GiB / 0 free]
      PV /dev/sdb1              lvm2 [1,82 TiB]
      Total: 2 [2,27 TiB] / in use: 1 [465,16 GiB] / in no VG: 1 [1,82 TiB]
    [root@server ~]# pvs -v
        Scanning for physical volume names
      PV         VG       Fmt  Attr PSize   PFree DevSize PV UUID
      /dev/sda2  VolGroup lvm2 a--  465,16g 0     465,16g HftbaD-MBs0-3p7D-6O13-CrzU-T9Gb-6W0ofB
      /dev/sdb1           lvm2 a--  1,82t   1,82t 1,82t   dD4XZP-WStA-61xV-5Sff-ifmW-R4rR-JenHoU
    [root@server ~]# pvck -d -v /dev/sdb1
      Scanning /dev/sdb1
      Found label on /dev/sdb1, sector 1, type=LVM2 001
      Found text metadata area: offset=4096, size=1044480
      Found LVM2 metadata record at offset=10752, size=1037824, offset2=0 size2=0
      Found LVM2 metadata record at offset=9216, size=1536, offset2=0 size2=0
      Found LVM2 metadata record at offset=7168, size=2048, offset2=0 size2=0
      Found LVM2 metadata record at offset=5632, size=1536, offset2=0 size2=0
    I attempted to fix it as described here and was able to extract the 4 metadata sets listed above (using e.g. dd bs=1 skip=5632 count=1536 if=/dev/sdb1 of=output.file), but none of them includes the LV data which I'm missing. Please advise how I could access the files which should be on /dev/sdb1. Any help is appreciated!
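    If the host still has LVM's automatic metadata backups (a sketch; the volume group name and archive file name are placeholders, and /etc/lvm/archive is the default location), the usual recovery path is vgcfgrestore followed by reactivation:

        # list the archived metadata versions LVM kept for the volume group
        vgcfgrestore --list vg_data
        # restore the most recent good version onto the PV, then reactivate and verify
        vgcfgrestore -f /etc/lvm/archive/vg_data_00042.vg vg_data
        vgchange -ay vg_data
        lvs vg_data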

    Read the article

  • FBX Importer - Texture Name

    - by CmasterG
    I have a problem with the FBX SDK. I read in the data for the vertex positions and the uv coordinates. It works fine, but now I want to read, for each polygon, which texture it belongs to, so that I can have models with multiple textures. Can anyone tell me how I can get the texture name (file name) for a polygon? My code to read in the vertex positions and uv coordinates is the following:
    int i, j, lPolygonCount = pMesh->GetPolygonCount();
    FbxVector4* lControlPoints = pMesh->GetControlPoints();
    int vertexId = 0;
    for (i = 0; i < lPolygonCount; i++)
    {
        int lPolygonSize = pMesh->GetPolygonSize(i);
        for (j = 0; j < lPolygonSize; j++)
        {
            int lControlPointIndex = pMesh->GetPolygonVertex(i, j);
            FbxVector4 pos = lControlPoints[lControlPointIndex];
            current_model[vertex_index].x = pos.mData[0] - pivot_offset[0];
            current_model[vertex_index].y = pos.mData[1] - pivot_offset[1];
            current_model[vertex_index].z = pos.mData[2] - pivot_offset[2];
            FbxVector4 vertex_normal;
            pMesh->GetPolygonVertexNormal(i, j, vertex_normal);
            current_model[vertex_index].nx = vertex_normal.mData[0];
            current_model[vertex_index].ny = vertex_normal.mData[1];
            current_model[vertex_index].nz = vertex_normal.mData[2];
            //read in UV data
            FbxStringList lUVSetNameList;
            pMesh->GetUVSetNames(lUVSetNameList);
            //get lUVSetIndex-th uv set
            const char* lUVSetName = lUVSetNameList.GetStringAt(0);
            const FbxGeometryElementUV* lUVElement = pMesh->GetElementUV(lUVSetName);
            if (!lUVElement)
                continue;
            // only support mapping mode eByPolygonVertex and eByControlPoint
            if (lUVElement->GetMappingMode() != FbxGeometryElement::eByPolygonVertex &&
                lUVElement->GetMappingMode() != FbxGeometryElement::eByControlPoint)
                return;
            //index array, which holds the indexes referencing the uv data
            const bool lUseIndex = lUVElement->GetReferenceMode() != FbxGeometryElement::eDirect;
            const int lIndexCount = (lUseIndex) ? lUVElement->GetIndexArray().GetCount() : 0;
            FbxVector2 lUVValue;
            //get the index of the current vertex in the control points array
            int lPolyVertIndex = pMesh->GetPolygonVertex(i, j);
            //the UV index depends on the reference mode
            //int lUVIndex = lUseIndex ? lUVElement->GetIndexArray().GetAt(lPolyVertIndex) : lPolyVertIndex;
            int lUVIndex = pMesh->GetTextureUVIndex(i, j);
            lUVValue = lUVElement->GetDirectArray().GetAt(lUVIndex);
            current_model[vertex_index].tu = (float)lUVValue.mData[0];
            current_model[vertex_index].tv = (float)lUVValue.mData[1];
            vertex_index++;
        }
    }
    float v1[3], v2[3], v3[3];
    v1[0] = current_model[vertex_index - 3].x;
    v1[1] = current_model[vertex_index - 3].y;
    v1[2] = current_model[vertex_index - 3].z;
    v2[0] = current_model[vertex_index - 2].x;
    v2[1] = current_model[vertex_index - 2].y;
    v2[2] = current_model[vertex_index - 2].z;
    v3[0] = current_model[vertex_index - 1].x;
    v3[1] = current_model[vertex_index - 1].y;
    v3[2] = current_model[vertex_index - 1].z;
    collision_model->addTriangle(v1,v2,v3);
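    For the texture lookup itself, a sketch along these lines (variable names are mine; the calls are the standard FBX SDK material element API) reads the polygon's material index and pulls the diffuse texture's file name from the corresponding material:

        // inside the loop over polygons: find polygon i's material, then its diffuse texture
        FbxGeometryElementMaterial* lMaterialElement = pMesh->GetElementMaterial();
        if (lMaterialElement)
        {
            // with eAllSame mapping there is a single shared index; otherwise use polygon i's entry
            int lMaterialIndex = (lMaterialElement->GetMappingMode() == FbxGeometryElement::eAllSame)
                ? lMaterialElement->GetIndexArray().GetAt(0)
                : lMaterialElement->GetIndexArray().GetAt(i);
            FbxSurfaceMaterial* lMaterial = pMesh->GetNode()->GetMaterial(lMaterialIndex);
            if (lMaterial)
            {
                FbxProperty lProperty = lMaterial->FindProperty(FbxSurfaceMaterial::sDiffuse);
                FbxFileTexture* lTexture = lProperty.GetSrcObject<FbxFileTexture>(0);
                if (lTexture)
                {
                    const char* lTextureFileName = lTexture->GetFileName();
                    // associate lTextureFileName with the current polygon here
                }
            }
        }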

    Read the article

  • How do I use key combinations on an axis on a joystick in xorg?

    - by valadil
    I'm using xserver-xorg-input-joystick on Debian Stable so I can use a joystick in place of the mouse. I have mouse movement working correctly, but got stuck trying to add functions for some other keys. These work: #Left stick #Pointer Option "MapAxis1" "mode=relative axis=1.5x" Option "MapAxis2" "mode=relative axis=1.5y" #Right stick #Arrow keys Option "MapAxis4" "mode=relative keylow=Left keyhigh=Right" Option "MapAxis5" "mode=relative keylow=Up keyhigh=Down" But when I try to make key combos (so I can navigate windows and screens in xmonad) I have no luck. #dpad #xmonad focus #up/down toggle window. l/r choose screen. Option "MapAxis8" "mode=relative keylow=Super_L,k keyhigh=Super_L,j" Option "MapAxis7" "mode=relative keylow=Super_L,w keyhigh=Super_L,e" I've also tried Super_R, plain old Super, Meta, and mod4mask, and anything else I can think of. These buttons print the letter, but don't appear to hold down the modifying key. The exception to that is shift. If I specify Shift_L or Shift_R, I get a capital letter. xev indicates that modifier keys are being pressed. If I lower Axis8, I get press Super_L, press k, release k, release Super_L. That looks like it should be working. Maybe this is an xmonad problem and not a joystick driver one? I'm also having trouble with getting an axis to use other XF86 keys: # triggers # song selection Option "MapAxis3" "mode=relative keylow=none keyhigh=XF86AudioForward" Option "MapAxis6" "mode=relative keylow=none keyhigh=XF86AudioBack" That does nothing. Any idea why? If it turns out that this isn't something I can do on an axis, but would work with a button, is there a way to treat my joysticks as buttons? Also, if anyone has suggestions for the other 5 buttons I'll have left after mouse buttons are bound, I'm listening.

    Read the article

  • IE6 does not follow 302 redirect - displays 404 instead

    - by Dexter
    One of our clients has reported that they are experiencing 404 (file not found) errors when attempting to navigate a website that we support. The behaviour only appears to affect her - other users on the same machine can navigate the website fine, but the problem follows her from one PC to another. I've had a good look through the IIS server logs and have identified the requests in question. The normal request pattern is as follows:
    POST /page.aspx - 80 - ... 401 1 0
    POST /page.aspx - 80 DOMAIN/user ... 302 0 0
    GET /anotherPage.aspx Request=833f80a5-f34c-4b0e-addb-d73e1ee1663a 80 - ... 401 1 0
    GET /anotherPage.aspx Request=833f80a5-f34c-4b0e-addb-d73e1ee1663a 80 DOMAIN/user ... 200 0
    However, requests for the affected user do not include a request for the redirected page, nor an entry for the 404, i.e.:
    POST /page.aspx - 80 - ... 401 1 0
    POST /page.aspx - 80 DOMAIN/user ... 302 0 0
    ... other unrelated requests
    Can anyone suggest what might trigger this behaviour, and how I might investigate the cause or prevent it from occurring? I read here that the Allow META refresh option in IE6 might trigger this behaviour, but I have not been able to replicate the behaviour by modifying this setting only.

    Read the article

  • Hijax == sneaky Javascript redirects? Will I get banned from Google?

    - by Chris Jacob
    Question
    Will I get penalised as "sneaky JavaScript redirects" by Google if I have the following Hijax setup (which requires a JavaScript redirect on the page indexed by Google)?
    Goal
    I want to implement Hijax to enable AJAX content to be accessible to non-JavaScript users and search engine crawlers.
    Background
    I'm working on a static file server (GitHub Pages). No server-side tricks allowed (so Google's #! "hash bang" solution is not an option). I'm trying to keep my files DRY. I don't want to repeat the common OUTER template in all my files, i.e. header, navigation menu, footer, etc. They will live in the main index.html.
    Setup the Hijax
    The index.html page contains all the OUTER html/css/js... the site's template. index.html has a <div id="content"> which defaults to containing the "homepage" html. index.html has a navigation menu, with a Hijax link to an "about" page. With JavaScript disabled (e.g. a crawler) it follows the link to /about.html. With JavaScript enabled (e.g. most people) the link updates the url hash fragment to /#about and jQuery replaces the <div id="content"> innerHTML with $("#content").load("about.html #inner-container");.
    AJAX content
    about.html does not contain anything extra to try and cloak content for crawlers. The about.html file contains enough HTML / CSS / JavaScript to display /about.html as a standalone page with its own META data... e.g. <html><head><title>About</title>...</head><body></body></html>. about.html has NO OUTER HTML template (i.e. header, navigation menu, footer, etc). The about.html <body> contains a <div id="inner-container"> which holds the content that is injected into index.html. about.html has a <noscript> tag as the first child of <body> which explains to non-JavaScript users that they are viewing the about page "inner content" - with a link to navigate to the index.html page to get the full page layout with menu.
    The (Sneaky?) Redirect
    Google indexes the /about.html page. However, when a person with JavaScript enabled visits that page there is no OUTER html template (e.g. header, navigation menu, footer, etc). So I need to do a JavaScript redirect to get the person over to the /#about page (deeplinking to the "about" page "state" in index.html). I'm thinking of doing a "redirect on click or after 10 seconds". The end result is that the user ends up on an "enhanced" page back on index.html with all its OUTER template - but the core "page" content is practically identical.
    Known issue with inbound links, e.g. Share / Bookmarking
    It seems that if a user shares the URL /#about on their blog, when allocating inbound links to my site Google ignores everything after the # ... it allocates value to the / page - See: http://stackoverflow.com/questions/5028405/hashbang-vs-hijax/5166665#5166665. I can only try and minimise this issue by offering "share" buttons on the page with the appropriate urls, i.e. /about.html.
    Duplicate
    Sorry. I posted this same question over on http://stackoverflow.com/questions/5561686/hijax-sneaky-javascript-redirects-will-i-get-banned-from-google ... then realised it probably belongs more on this Stack Exchange site... Not sure if I should delete the Stack Overflow question, or just leave it on both sites? Please leave a comment.

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now “xVelocity in-memory analytics engine (VertiPaq)”, but using xVelocity and VertiPaq when we talk about Analysis Services has the same meaning. In this post I’ll show how to investigate the column sizes of an existing Tabular database, so that you can find the most important columns to be optimized. A first approach can be looking in the DataDir of Analysis Services for the folder containing the database. Then, look for the biggest files in all subfolders and you will find the name of a file that contains the name of the most expensive column. However, this heuristic process is not very optimized. A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in and execute it) you will see all database objects sorted by used size in descending order. SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS ORDER BY used_size DESC You can look at the first rows in order to understand what the most expensive columns in your tabular model are. The interesting data provided are:
    TABLE_ID: the name of the object – it can also be a dictionary or an index
    COLUMN_ID: the name of the column the object belongs to – you may also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes
    RECORDS_COUNT: the number of rows in the column
    USED_SIZE: the memory used for the object
    By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are:
    Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
    Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (i.e. the temperature), round the number to the precision that is relevant to you.
    Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article.
    Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don’t have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower density columns (those with a small number of distinct values) and going to higher density columns (those with high cardinality) is the technique that provides the best compression ratio.
    After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session “VertiPaq Under the Hood” on March 21 at 08:00 GMT.
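    As an illustration of the "split the column" option (a sketch with made-up table and column names; the technique itself is the one from the VertiPaq optimization article referenced above), a high-cardinality value can be fed to the model as two lower-cardinality parts and recombined on the fly:

        -- in the source query: feed the engine two low-cardinality halves
        SELECT TransactionID,
               Amount / 100 AS AmountHigh,  -- whole currency units
               Amount % 100 AS AmountLow    -- cents
        FROM Sales;

        -- in the Tabular model, recombine with a DAX measure:
        -- Total Amount := SUMX ( Sales, Sales[AmountHigh] * 100 + Sales[AmountLow] )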

    Read the article

  • How do I setup JBoss 5.1.0.GA to run multiple instances?

    - by djangofan
    Does anyone have any experience or advice in setting up multiple JBoss 5.1.x instances on the same machine that has 1 network card? Here is what I did:
    1. Installed JBoss 5.1.0.GA into c:\myjboss
    1.5. Copied the server/default directory to server/ports-01 and server/ports-02 so they have their own config. Did I assume correctly?
    2. Ran .\run.bat -c ports-01
    3. Ran .\run.bat -c ports-02
    At this point there are 2 instances, but the second instance doesn't load correctly because of what is probably a few port conflicts. For example: the http port ends up being 8080 for both instances, which it gets from line #49 in the C:\myjboss\server\all\conf\bindingservice.beans\META-INF\bindings-jboss-beans.xml file. Earlier in the server load it clearly gets the value from line #63 in that same file. I don't know why it gets part of the port config from line #49 and the other part from line #63. Confused. I also tried: .\run.bat -Djboss.service.binding.set=ports-01 -c ports-01 and it made little difference. Any ideas on what I am doing wrong?
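    For comparison, here is how I'd expect the two instances to be started so each picks up its own binding set (a sketch; the loopback bind addresses are placeholders, and jboss.service.binding.set is the same property tried above). As I understand the ServiceBindingManager, the ports-01 and ports-02 sets offset the standard ports by 100 and 200, e.g. HTTP on 8180 and 8280:

        rem instance 1: ports-01 binding set
        run.bat -c ports-01 -b 127.0.0.1 -Djboss.service.binding.set=ports-01
        rem instance 2: ports-02 binding set
        run.bat -c ports-02 -b 127.0.0.2 -Djboss.service.binding.set=ports-02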

    Read the article

  • Identity in .NET 4.5&ndash;Part 2: Claims Transformation in ASP.NET (Beta 1)

    - by Your DisplayName here!
    In my last post I described how every identity in .NET 4.5 is now claims-based. If you are coming from WIF you might think: great – how do I transform those claims?
    Sidebar: What is claims transformation?
    One of the most essential features of WIF (and .NET 4.5) is the ability to transform credentials (or tokens) to claims. During that process the “low level” token details are turned into claims. An example would be a Windows token – it contains things like the name of the user and which groups he belongs to. That information will be surfaced as claims of type Name and GroupSid. Forms users will be represented as a Name claim (all the other claims that WIF provided for FormsIdentity are gone in 4.5). The issue here is that your applications typically don’t care about those low level details, but rather about “what’s the purchase limit of alice”. The process of turning the low level claims into application-specific ones is called claims transformation. In pre-claims times this would have been done by a combination of Forms Authentication extensibility, role manager and maybe ASP.NET profile. With claims transformation all your identity gathering code is in one place (and the outcome can be cached in a single place, as opposed to multiple ones).
    The central class for claims transformation is called ClaimsAuthenticationManager. This class has two purposes – first, looking at the incoming (low level) principal and making sure all required information about the user is present. This is your first chance to reject a request. And second – modeling identity information in a way that is relevant for the application (see also here). This class gets called (when present) during the pipeline when using WS-Federation. But not when using the standard .NET principals. I am not sure why – maybe because it is beta 1. Anyhow, a number of people asked me about it, and the following is a little HTTP module that brings that feature back in 4.5.
    public class ClaimsTransformationHttpModule : IHttpModule
    {
        public void Dispose()
        { }
        public void Init(HttpApplication context)
        {
            context.PostAuthenticateRequest += Context_PostAuthenticateRequest;
        }
        void Context_PostAuthenticateRequest(object sender, EventArgs e)
        {
            var context = ((HttpApplication)sender).Context;
            // no need to call transformation if session already exists
            if (FederatedAuthentication.SessionAuthenticationModule != null &&
                FederatedAuthentication.SessionAuthenticationModule.ContainsSessionTokenCookie(context.Request.Cookies))
            {
                return;
            }
            var transformer = FederatedAuthentication.FederationConfiguration.IdentityConfiguration.ClaimsAuthenticationManager;
            if (transformer != null)
            {
                var transformedPrincipal = transformer.Authenticate(context.Request.RawUrl, context.User as ClaimsPrincipal);
                context.User = transformedPrincipal;
                Thread.CurrentPrincipal = transformedPrincipal;
            }
        }
    }
    HTH
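    To round this out, here is a minimal ClaimsAuthenticationManager sketch (the claim type and value are invented for the example; the override is the standard Authenticate(resourceName, incomingPrincipal) from System.Security.Claims):

        using System.Security.Claims;

        public class MyClaimsTransformer : ClaimsAuthenticationManager
        {
            public override ClaimsPrincipal Authenticate(string resourceName, ClaimsPrincipal incomingPrincipal)
            {
                // first purpose: reject requests that don't carry an authenticated user
                if (incomingPrincipal == null || !incomingPrincipal.Identity.IsAuthenticated)
                {
                    return incomingPrincipal;
                }

                // second purpose: turn low level claims into application specific ones
                var identity = (ClaimsIdentity)incomingPrincipal.Identity;
                identity.AddClaim(new Claim("http://myapp/claims/purchaselimit", "1000"));

                return incomingPrincipal;
            }
        }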

    Read the article

  • Snapshotting single disk of running Hyper-V VM

    - by modelnine
    I'm currently somewhat at a loss as to how to create a snapshot of a single virtual hard-disk of a running Hyper-V VM. Generally, creating a differential disk while a server is shut down is no problem (i.e., call the New-VHD cmdlet and pass a ParentPath, then update the VHD binding of the respective VM device), but while the host is running, all I can find is checkpointing the VM as a whole (which creates snapshots of all attached disks) and leaves the VM state in a form which isn't easily processable by external tools (i.e., it requires reading additional meta-data from the VM). Generally, what I'd like to happen for a single-disk snapshot (in my understanding) is:
    1. Pause the VM
    2. Rename the current disk to some other name which marks it as a base snapshot
    3. Create a new VHD which has the renamed VHD as parent path and is marked as "current"
    4. Swap the VHD of the snapshotted hard-disk to the newly created differencing VHD
    5. Resume the VM
    Is there any way to do this programmatically? Update: I've seen that this is actually possible with SCSI disks, i.e. pause the VM, remove the SCSI disk, make the snapshot, reattach the SCSI disk at the same position, resume the VM. And the VM resumes properly. But: is something similar also possible with G1 (generation 1) machines for the boot disk, which is always IDE?
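    For reference, the SCSI-disk variant from the update could look roughly like this with the Hyper-V PowerShell module (a sketch; the VM name, paths and controller locations are placeholders):

        # pause the VM and detach the data disk from its SCSI slot
        Suspend-VM -Name 'MyVM'
        Remove-VMHardDiskDrive -VMName 'MyVM' -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1

        # rename the current VHD to mark it as the snapshot base, then create a differencing child
        Rename-Item 'D:\VHDs\data.vhdx' 'data-base.vhdx'
        New-VHD -Path 'D:\VHDs\data.vhdx' -ParentPath 'D:\VHDs\data-base.vhdx' -Differencing

        # reattach the new differencing disk at the same position and resume
        Add-VMHardDiskDrive -VMName 'MyVM' -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1 -Path 'D:\VHDs\data.vhdx'
        Resume-VM -Name 'MyVM'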

    Read the article

  • Distributed storage and computing

    - by Tim van Elteren
    Dear Serverfault community, after researching a number of distributed file systems for deployment in a production environment, with the main purpose of performing both batch and real-time distributed computing, I've identified the following list as potential candidates, based mainly on maturity, license and support:
    Ceph
    Lustre
    GlusterFS
    HDFS
    FhGFS
    MooseFS
    XtreemFS
    The key properties that our system should exhibit:
    an open source, liberally licensed, yet production-ready solution, e.g. a mature, reliable, community- and commercially-supported solution;
    the ability to run on commodity hardware, preferably being designed for it;
    high availability of the data, with the most focus on reads;
    high scalability, so operation over multiple data centres, possibly on a global scale;
    removal of single points of failure with the use of replication and distribution of (meta-)data, e.g. providing fault tolerance.
    The sensitivity points that were identified, and which resulted in the following questions, are:
    Transparency to the processing layer / application with respect to data locality, e.g. knowing where data is physically located at the server level, mainly for resource allocation and fast, high-performance processing - how can this be accomplished? Do you know from experience which solutions provide this transparency, and to what extent?
    POSIX compliance, or conformance, is mentioned on the wiki pages of most of the solutions listed above. The main question here is: how relevant is support for the POSIX standard? Hadoop, for example, isn't POSIX-compliant by design; what are the pros and cons?
    What about the difference between synchronous and asynchronous operation of a distributed file system? Though a synchronous distributed file system is preferred because of reliability, it also imposes certain limitations with respect to scalability. What would be, from your expertise, the way to go on this?
    I'm looking forward to your replies. Thanks in advance! :) With kind regards, Tim van Elteren

    Read the article

  • Spotlight on an office – Utrecht

    - by Maria Sandu
    This time in our monthly topic, we have our spotlight on the brand new Oracle office in Utrecht, the Netherlands. About 35km south-east of Schiphol Airport and centrally located in the Netherlands, Oracle moved into the Facet building in March 2011. Facet is much more than an office building; it creates a work environment that relates to the ‘No Limits’ philosophy Oracle has in the Netherlands. “No Limits” means the building belongs to everyone. You choose the best place to work, based on the activities of that moment. To point this out: we currently have 1050 people working for Oracle Netherlands, and 623 workplaces. There is virtually no limit to where you can sit in our shiny new offices; we no longer have 'zoning', where departments own specific areas in the building. Even the Managing Director of Oracle Netherlands does not have an office, and he chooses a different working place every day. So make sure you are prepared when he is sitting next to you one day! If nobody has a fixed workplace, then you would think that finding a colleague could be tricky. Oracle uses CU (‘SeeYou’), which makes all of us easier to locate. Upon entering the building you receive a text stating where the greatest concentration of your buddies is sitting. Our internal messaging service also proves to be very valuable for finding your colleagues. The heart of our building is the great RestOrant, with a very busy coffee bar. It offers an informal place for people to meet and is busy all day, not just at lunch time! The O-Bar in the atrium on the ground floor is also a very popular place to meet and drink tea or coffee, and gives a breathtaking introduction to the office to any of our first-time visitors. For a few minutes of relaxation during the working day, there are table tennis facilities and a Wii room on every floor! So if you are interested in joining Oracle in the Netherlands or anywhere else in EMEA, please have a look at http://campus.oracle.com for all of our latest vacancies and internships.

    Read the article

  • Map a URL bought with Dreamhost to Amazon EC2 (AWS)

    - by Edan Maor
    I have several URLs I purchased through Dreamhost. I'm starting to use Amazon's AWS, and I'd like to map the URLs to Amazon. This is something of a silly question, and I've already done the same thing several times with other services (mapping from Dreamhost to WebFaction). But for some reason, when I tried to find the proper way to do the same mapping to Amazon, I found a lot of detailed writing about whether I should be using CNAME or A records, etc. So I wanted to ask in the simplest possible terms and hopefully get a simple, concrete answer: I bought a URL from Dreamhost, and I have an EC2 server running on AWS (to which I have already mapped an Elastic IP address). How do I make the URL map to AWS? And if there are several options, which one should I effectively be using? P.S. Meta-question - why are things so much more difficult with AWS? When I search Google for "Move from Dreamhost to WebFaction", I get very simple answers on how to do the mapping. In what way is AWS different?
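    In concrete terms, the usual answer (a sketch; the domain, IP and EC2 hostname are placeholders) is an A record on the bare domain pointing at the Elastic IP, plus a CNAME for subdomains pointing at the instance's public DNS name, both of which can be entered as custom DNS records in the Dreamhost panel:

        ; zone sketch - example.com and the addresses are placeholders
        example.com.        IN  A      203.0.113.10                                ; the Elastic IP
        www.example.com.    IN  CNAME  ec2-203-0-113-10.compute-1.amazonaws.com.   ; instance public DNS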

    Read the article
