Search Results

Search found 17160 results on 687 pages for 'built in types'.


  • Installing vim7.2 on Solaris Sparc 2.6 as non-root

    - by Tobbe
    I'm trying to install vim to $HOME/bin by compiling the sources. ./configure --prefix=$home/bin seems to work, but when running make I get:

        > make
        Starting make in the src directory.
        If there are problems, cd to the src directory and run make there
        cd src && make first
        gcc -c -I. -Iproto -DHAVE_CONFIG_H -DFEAT_GUI_GTK -I/usr/include/gtk-2.0 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/pango-1.0 -I/usr/openwin/include -I/usr/sfw/include -I/usr/sfw/include/freetype2 -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -g -O2 -I/usr/openwin/include -o objects/buffer.o buffer.c
        In file included from buffer.c:28:
        vim.h:41: error: syntax error before ':' token
        In file included from os_unix.h:29, from vim.h:245, from buffer.c:28:
        /usr/include/sys/stat.h:251: error: syntax error before "blksize_t"
        /usr/include/sys/stat.h:255: error: syntax error before '}' token
        /usr/include/sys/stat.h:309: error: syntax error before "blksize_t"
        /usr/include/sys/stat.h:310: error: conflicting types for 'st_blocks'
        /usr/include/sys/stat.h:252: error: previous declaration of 'st_blocks' was here
        /usr/include/sys/stat.h:313: error: syntax error before '}' token
        In file included from /opt/local/bin/../lib/gcc/sparc-sun-solaris2.6/3.4.6/include/sys/signal.h:132, from /usr/include/signal.h:26, from os_unix.h:163, from vim.h:245, from buffer.c:28:
        /usr/include/sys/siginfo.h:259: error: syntax error before "ctid_t"
        /usr/include/sys/siginfo.h:292: error: syntax error before '}' token
        /usr/include/sys/siginfo.h:294: error: syntax error before '}' token
        /usr/include/sys/siginfo.h:390: error: syntax error before "ctid_t"
        /usr/include/sys/siginfo.h:398: error: conflicting types for '__fault'
        /usr/include/sys/siginfo.h:267: error: previous declaration of '__fault' was here
        /usr/include/sys/siginfo.h:404: error: conflicting types for '__file'
        /usr/include/sys/siginfo.h:273: error: previous declaration of '__file' was here
        /usr/include/sys/siginfo.h:420: error: conflicting types for '__prof'
        /usr/include/sys/siginfo.h:287: error: previous declaration of '__prof' was here
        /usr/include/sys/siginfo.h:424: error: conflicting types for '__rctl'
        /usr/include/sys/siginfo.h:291: error: previous declaration of '__rctl' was here
        /usr/include/sys/siginfo.h:426: error: syntax error before '}' token
        /usr/include/sys/siginfo.h:428: error: syntax error before '}' token
        /usr/include/sys/siginfo.h:432: error: syntax error before "k_siginfo_t"
        /usr/include/sys/siginfo.h:437: error: syntax error before '}' token
        In file included from /usr/include/signal.h:26, from os_unix.h:163, from vim.h:245, from buffer.c:28:
        /opt/local/bin/../lib/gcc/sparc-sun-solaris2.6/3.4.6/include/sys/signal.h:173: error: syntax error before "siginfo_t"
        In file included from os_unix.h:163, from vim.h:245, from buffer.c:28:
        /usr/include/signal.h:111: error: syntax error before "siginfo_t"
        /usr/include/signal.h:113: error: syntax error before "siginfo_t"
        buffer.c: In function `buflist_new':
        buffer.c:1502: error: storage size of 'st' isn't known
        buffer.c: In function `buflist_findname':
        buffer.c:1989: error: storage size of 'st' isn't known
        buffer.c: In function `setfname':
        buffer.c:2578: error: storage size of 'st' isn't known
        buffer.c: In function `otherfile_buf':
        buffer.c:2836: error: storage size of 'st' isn't known
        buffer.c: In function `buf_setino':
        buffer.c:2874: error: storage size of 'st' isn't known
        buffer.c: In function `buf_same_ino':
        buffer.c:2894: error: dereferencing pointer to incomplete type
        buffer.c:2895: error: dereferencing pointer to incomplete type
        *** Error code 1
        make: Fatal error: Command failed for target `objects/buffer.o'
        Current working directory /home/xluntor/vim72/src
        *** Error code 1
        make: Fatal error: Command failed for target `first'

    How do I fix the make errors? Or is there another way to install vim as non-root? Thanks in advance.

    Read the article

  • compile latest bnx2 driver on Debian Squeeze

    - by markus
    I want to upgrade the bnx2 network card driver in a Dell PowerEdge R410. I downloaded the latest driver version from the Broadcom website. When I try to compile the driver, it fails with the following errors:

        make
        make -C bnx2/src KVER=2.6.32-5-amd64 PREFIX=
        make[1]: Entering directory `/tmp/netxtreme2-6.2.23/bnx2-2.0.23b/src'
        make -C /lib/modules/2.6.32-5-amd64/build SUBDIRS=/tmp/netxtreme2-6.2.23/bnx2-2.0.23b/src modules
        make[2]: Entering directory `/usr/src/linux-headers-2.6.32-5-amd64'
        Building modules, stage 2.
        MODPOST 2 modules
        make[2]: Leaving directory `/usr/src/linux-headers-2.6.32-5-amd64'
        make[1]: Leaving directory `/tmp/netxtreme2-6.2.23/bnx2-2.0.23b/src'
        make -C bnx2x/src KVER=2.6.32-5-amd64 PREFIX=
        make[1]: Entering directory `/tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src'
        make -C /lib/modules/2.6.32-5-amd64/build M=`pwd` modules
        make[2]: Entering directory `/usr/src/linux-headers-2.6.32-5-amd64'
        CC [M] /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.o
        In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x.h:68, from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:80:
        /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1009:1: error: "PCI_VPD_LRDT_ID_STRING" redefined
        In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:34:
        /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1327:1: error: this is the location of the previous definition
        In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x.h:68, from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:80:
        /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1011:1: error: "PCI_VPD_LRDT_RO_DATA" redefined
        In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:34:
        /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1328:1: error: this is the location of the previous definition
        In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x.h:68, from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:80:
        /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1013:1: error: "PCI_VPD_LRDT_RW_DATA" redefined
        In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:34:
        /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1329:1: error: this is the location of the previous definition
        In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x.h:68, from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:80:
        /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1019:1: error: "PCI_VPD_SRDT_END" redefined
        In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:34:
        /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1334:1: error: this is the location of the previous definition
        In file included from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x.h:68, from /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.c:80:
        /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1032: error: conflicting types for 'pci_vpd_lrdt_size'
        /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1355: error: previous definition of 'pci_vpd_lrdt_size' was here
        /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1037: error: conflicting types for 'pci_vpd_srdt_size'
        /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1366: error: previous definition of 'pci_vpd_srdt_size' was here
        /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1042: error: conflicting types for 'pci_vpd_find_tag'
        /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1391: error: previous declaration of 'pci_vpd_find_tag' was here
        /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1077: error: conflicting types for 'pci_vpd_info_field_size'
        /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1377: error: previous definition of 'pci_vpd_info_field_size' was here
        /tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_compat.h:1082: error: conflicting types for 'pci_vpd_find_info_keyword'
        /usr/src/linux-headers-2.6.32-5-common/include/linux/pci.h:1403: error: previous declaration of 'pci_vpd_find_info_keyword' was here
        make[5]: *** [/tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src/bnx2x_main.o] Error 1
        make[4]: *** [_module_/tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src] Error 2
        make[3]: *** [sub-make] Error 2
        make[2]: *** [all] Error 2
        make[2]: Leaving directory `/usr/src/linux-headers-2.6.32-5-amd64'
        make[1]: *** [bnx2x.o] Error 2
        make[1]: Leaving directory `/tmp/netxtreme2-6.2.23/bnx2x-1.62.15/src'
        make: *** [l2build] Error 2

    Read the article

  • June 26th Links: ASP.NET, ASP.NET MVC, .NET and NuGet

    - by ScottGu
    Here is the latest in my link-listing series. Also check out my Best of 2010 Summary for links to 100+ other posts I've done in the last year. [I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    ASP.NET
    - Introducing new ASP.NET Universal Providers: Great post from Scott Hanselman on the new System.Web.Providers we are working on. This release delivers new ASP.NET Membership, Role Management, Session, and Profile providers that work with SQL Server, SQL CE and SQL Azure.
    - CSS Sprites and the ASP.NET Sprite and Image Optimization Library: Great post from Scott Mitchell that talks about a free library for ASP.NET that you can use to optimize your CSS and images to reduce HTTP requests and speed up your site.
    - Better HTML5 Support for the VS 2010 Editor: Another great post from Scott Hanselman on an update several people on my team did that enables richer HTML5 editing support within Visual Studio 2010.
    - Install the Ajax Control Toolkit from NuGet: Nice post by Stephen Walther on how you can now use NuGet to install the Ajax Control Toolkit within your applications. This makes it much easier to reference and use.
    - May 2011 Release of the Ajax Control Toolkit: Another great post from Stephen Walther that talks about the May release of the Ajax Control Toolkit. It includes a bunch of nice enhancements and fixes.
    - SassAndCoffee 0.9 Released: Paul Betts blogs about the latest release of his SassAndCoffee extension (available via NuGet). It enables you to easily use Sass and CoffeeScript within your ASP.NET applications (both MVC and WebForms).

    ASP.NET MVC
    - ASP.NET MVC Mini-Profiler: The folks at StackOverflow.com (a great site built with ASP.NET MVC) have released a nice (free) profiler they've built that enables you to easily profile your ASP.NET MVC 3 sites and tune them for performance.
    - Globalization, Internationalization and Localization in ASP.NET MVC 3: Great post from Scott Hanselman on how to enable internationalization, globalization and localization support within your ASP.NET MVC 3 and jQuery solutions.
    - Precompile your MVC Razor Views: Great post from David Ebbo that discusses a new Razor Generator tool that enables you to pre-compile your Razor view templates as assemblies - which enables a bunch of cool scenarios.
    - Unit Testing Razor Views: Nice post from David Ebbo that shows how to use his new Razor Generator to enable unit testing of Razor view templates with ASP.NET MVC.
    - Bin Deploying ASP.NET MVC 3: Nice post by Phil Haack that covers a cool feature added to VS 2010 SP1 that makes it really easy to \bin deploy ASP.NET MVC and Razor within your application. This enables you to easily deploy the app to servers that don't have ASP.NET MVC 3 installed.

    .NET
    - Table Splitting with EF 4.1 Code First: Great post from Morteza Manavi that discusses how to split up a single database table across multiple EF entity classes. This shows off some of the power behind EF 4.1 and is very useful when working with legacy database schemas.
    - Choosing the Right Collection Class: Nice post from James Michael Hare that talks about the different collection class options available within .NET. A nice overview for people who haven't looked at all of the support now built into the framework.
    - Little Wonders: Empty(), DefaultIfEmpty() and Count() helper methods: Another in James Michael Hare's excellent series on .NET/C# "Little Wonders". This post covers some of the great helper methods now built into .NET that make coding even easier.

    NuGet
    - NuGet 1.4 Released: Learn all about the latest release of NuGet - which includes a bunch of cool new capabilities. It takes only seconds to update to it - go for it!
    - NuGet in Depth: Nice presentation from Scott Hanselman all about NuGet and some of the investments we are making to enable a better open source ecosystem within .NET.
    - NuGet for the Enterprise - NuGet in a Continuous Integration Automated Build System: Great post from Scott Hanselman on how to integrate NuGet within enterprise build environments and enable it with CI solutions.

    Hope this helps, Scott

    Read the article

  • What I don't like about WIF's Claims-based Authorization

    - by Your DisplayName here!
    In my last post I wrote about what I like about WIF's proposed approach to authorization - I also said that I would definitely build upon that infrastructure for my own systems. But implementing such a system is a little harder than it needs to be. Here's why (and that's purely my perspective):

    First of all, WIF's authorization comes in two "modes":

    - Per-request authorization. When an ASP.NET/WCF request comes in, the registered authorization manager gets called. For SOAP, the SOAP action gets passed in; for HTTP requests (ASP.NET, WCF REST), the URL and verb. In ASP.NET, per-request authorization is optional (it depends on whether you have added the ClaimsAuthorizationHttpModule). In WCF you always get the per-request checks as soon as you register the authorization manager in configuration.
    - Imperative authorization. This happens when you explicitly call the claims authorization API from within your code. There you have full control over the values for action and resource.

    I personally prefer imperative authorization, because first of all I don't believe in URL-based authorization. Especially in the times of MVC and routing tables, URLs can be easily changed - but then you also have to adjust your authorization logic every time. Also, you typically need more knowledge than a simple "is user x allowed to invoke operation y".

    One problem I have is that both the per-request calls and the standard WIF imperative authorization APIs wrap actions and resources in the same claim type. This makes it hard to distinguish between the two authorization modes in your authorization manager - but you typically need that feature to structure your authorization policy evaluation in a clean way.

    The second problem (which is somewhat related to the first one) is the standard API for interacting with the claims authorization manager. The API comes as an attribute (ClaimsPrincipalPermissionAttribute) as well as a class to use programmatically (ClaimsPrincipalPermission). Both only allow you to pass in simple strings (which results in the wrapping with standard claim types mentioned earlier), and both throw a SecurityException when the check fails. The attribute is a code access permission attribute (like PrincipalPermission), which means it will always be invoked regardless of how you call the code. This may be exactly what you want, or not. In a unit-testing situation (like an MVC controller) you typically want to test the logic in the function - not the security check.

    The good news is that the WIF API is flexible enough that you can build your own infrastructure around its core. For my own projects I implemented the following extensions:

    - A way to invoke the registered claims authorization manager with more overloads, e.g. with different claim types or a complete AuthorizationContext.
    - A new CAS attribute (with the same calling semantics as the built-in one) with custom claim types.
    - An MVC authorization attribute with custom claim types.
    - A way to use branching - as opposed to catching a SecurityException.

    I will post the code for these various extensions here - so stay tuned.
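
    To make the critique concrete, here is a minimal sketch of the two standard entry points described above. This is my own illustration, not code from the post: the Customer resource and the operation strings are hypothetical, and the namespace is Microsoft.IdentityModel.Claims in WIF 1.0 (System.IdentityModel.Services in .NET 4.5).

        using System.Security.Permissions;
        using Microsoft.IdentityModel.Claims;

        public class CustomerService
        {
            // Declarative check: a CAS-style permission attribute, evaluated on
            // every call regardless of how the method is invoked.
            [ClaimsPrincipalPermission(SecurityAction.Demand,
                Resource = "Customer", Operation = "Delete")]
            public void DeleteCustomer(int id)
            {
                // ... domain logic ...
            }

            // Imperative check: same semantics, but called explicitly; throws a
            // SecurityException when the authorization manager denies access.
            public void ArchiveCustomer(int id)
            {
                ClaimsPrincipalPermission.CheckAccess("Customer", "Archive");
                // ... domain logic ...
            }
        }

    Both paths wrap these plain strings in the same standard claim types before the authorization manager sees them, which is exactly the distinguishability problem described above.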

    Read the article

  • Few events I'm speaking at in early 2013

    - by Mladen Prajdic
    2013 has started great and the SQL community is already brimming with events. At some of these events you can come say hi. I'll be glad you do! These are the events with dates and locations that I know I'll be speaking at so far.

    February 16th: SQL Saturday #198 - Vancouver, Canada
    The session I'll present in Vancouver is SQL Impossible: Restoring/Undeleting a table. Yes, you read the title right. No, it's not about the usual "one table per partition" and "restore full backup then copy the data over" methods. No, there are no 3rd party tools involved. Just you and your SQL Server. Yes, it's crazy. No, it's not for production purposes. And yes, that's why it's so much fun. Prepare to dive into the world of data pages, log records, deletes, truncates and backups and how it all works together to get your table back from the endless void. Want to know more? Come and see! This is an advanced-level session where we'll dive into the internals of data pages, transaction log records and page restores.

    March 8th-9th: SQL Saturday #194 - Exeter, UK
    In Exeter I'll be presenting twice. On the first day I'll have a full-day precon titled From SQL Traces to Extended Events - The next big switch. This pre-con will give you insight into both of the current tracing technologies in SQL Server. The old SQL Trace, which has served us well over the past 10 or so years, is on its way out, because the overhead and details it produces are no longer enough to deal with today's loads. The new Extended Events are a lightweight tracing mechanism built directly into the SQLOS, thus giving us information SQL Trace just couldn't. They were designed and built with performance in mind, and it shows. Mastering Extended Events requires learning at least one new skill: XML querying.
    The second session I'll have on Saturday is titled SQL Injection from website to SQL Server. SQL Injection is still one of the biggest reasons various websites and applications get hacked. The solution, as everyone tells us, is simple: use SQL parameters. But is that enough? In this session we'll look at how an attacker would go about using SQL Injection to gain access to your database, see its schema and data, take over the server, upload files and do various other mischief on your domain. This is a fun session that always brings out a few laughs in the audience, because they didn't realize what can be done.

    April 23rd-25th: NTK conference - Bled, Slovenia (Slovenian website only)
    This is a conference with history. This year marks its 18th year running. It's a relatively large IT conference that focuses on various Microsoft technologies like .NET, Azure, SQL Server, Exchange, Security, etc. The main sessions' language is Slovenian, but this is slowly changing, so it's becoming more interesting for foreign attendees. This year it's happening in the beautiful town of Bled in the Alps. The scenery alone is worth the visit, wouldn't you agree? And this year there are quite a few well-known speakers present! The session title isn't known yet.

    May 2nd-4th: SQL Bits XI - Nottingham, UK
    SQL Bits is the largest SQL Server conference in Europe. It's a 3-day conference with top speakers and content all dedicated to SQL Server. The session I'll present here is an hour-long version of the precon I'll give in Exeter: From SQL Traces to Extended Events - The next big switch. The session description is the same as for the Exeter precon, but we'll focus more on how the Extended Events work, with only a brief overview of the old SQL Trace architecture.

    Read the article

  • Does my use of the strategy pattern violate the fundamental MVC pattern in iOS?

    - by Goodsquirrel
    I'm about to use the strategy pattern in my iOS app, but I feel like my approach violates the fundamental MVC pattern somehow.

    My app is displaying visual "stories", and a Story consists (i.e. has @properties) of one Photo and one or more VisualEvent objects to represent e.g. animated circles or moving arrows on the photo. Each VisualEvent object therefore has an eventType @property, which might be e.g. kEventTypeCircle or kEventTypeArrow. All events have things in common, like a startTime @property, but differ in the way they are drawn on the StoryPlayerView.

    Currently I'm trying to follow the MVC pattern and have a StoryPlayer object (my controller) that knows about both the model objects (like Story and all kinds of visual events) and the view object StoryPlayerView. To choose the right drawing code for each of the different visual event types, my StoryPlayer uses a switch statement.

        @implementation StoryPlayer
        // (...)
        - (void)showVisualEvent:(VisualEvent *)event onStoryPlayerView:storyPlayerView {
            switch (event.eventType) {
                case kEventTypeCircle:
                    [self showCircleEvent:event onStoryPlayerView:storyPlayerView];
                    break;
                case kEventTypeArrow:
                    [self showArrowDrawingEvent:event onStoryPlayerView:storyPlayerView];
                    break;
                // (...)
            }
        }

    But switch statements for type checking are bad design, aren't they? According to Uncle Bob they lead to tight coupling and can and should almost always be replaced by polymorphism. Having read about the strategy pattern in Head First Design Patterns, I felt this was a great way to get rid of my switch statement. So I changed the design like this: all specialized visual event types are now subclasses of an abstract VisualEvent class that has a showOnStoryPlayerView: method.

        @interface VisualEvent : NSObject
        - (void)showOnStoryPlayerView:(StoryPlayerView *)storyPlayerView; // abstract
        @end

    Each concrete subclass implements a specialized version of this drawing behavior method.

        @implementation CircleVisualEvent
        - (void)showOnStoryPlayerView:(StoryPlayerView *)storyPlayerView {
            [storyPlayerView drawCircleAtPoint:self.position
                                         color:self.color
                                     lineWidth:self.lineWidth
                                        radius:self.radius];
        }
        @end

    The StoryPlayer now simply calls the same method on all types of events.

        @implementation StoryPlayer
        - (void)showVisualEvent:(VisualEvent *)event onStoryPlayerView:storyPlayerView {
            [event showOnStoryPlayerView:storyPlayerView];
        }
        @end

    The result seems to be great: I got rid of the switch statement, and if I ever have to add new types of VisualEvents in the future, I simply create new subclasses of VisualEvent, and I won't have to change anything in StoryPlayer. But of course this approach violates the MVC pattern, since now my model has to know about and depend on my view! Now my controller talks to my model, and my model talks to the view, calling methods on StoryPlayerView like drawCircleAtPoint:color:lineWidth:radius:. But this kind of call should be controller code, not model code, right? It seems to me like I made things worse. I'm confused! Am I completely missing the point of the strategy pattern? Is there a better way to get rid of the switch statement without breaking model-view separation?

    Read the article

  • Unleash the Power of Cryptography on SPARC T4

    - by B.Koch
    by Rob Ludeman

    Oracle's SPARC T4 systems are architected to deliver enhanced value for customers via the inclusion of many integrated features. One of the best examples of this approach is demonstrated in the on-chip cryptographic support that delivers wire-speed encryption capabilities without any impact to application performance.

    The Evolution of SPARC Encryption

    SPARC T-Series systems have a long history of providing this capability, dating back to the release of the first T2000 systems, which featured support for on-chip RSA encryption directly in the UltraSPARC T1 processor. Successive generations have built on this approach by adding support for additional encryption ciphers that are tightly coupled with the Oracle Solaris 10 and Solaris 11 encryption framework. While earlier versions of this technology were implemented using co-processors, the SPARC T4 was redesigned with new crypto instructions to eliminate some of the performance overhead associated with the former approach, resulting in much higher performance for encrypted workloads.

    The Superiority of the SPARC T4 Approach to Crypto

    As companies engage in more and more e-commerce, the need to provide greater degrees of security for these transactions is more critical than ever before. Traditional methods of securing data in transit have a number of drawbacks that are addressed by the SPARC T4 cryptographic approach.

    1. Performance degradation - cryptography is highly compute-intensive, so there is a significant cost when using other architectures without embedded crypto functionality. This performance penalty impacts the entire system, slowing down the performance of web servers (SSL), for example, and potentially bogging down the speed of other business applications. The SPARC T4 processor enables customers to deliver high levels of security to internal and external customers while not incurring an impact to overall SLAs in their IT environment.

    2. Added cost - one method of avoiding performance degradation is the addition of add-in cryptographic accelerator cards or external offload engines in other systems. While these solutions provide a brute-force mechanism to avoid the problem of slower system performance, they usually come at an added cost. Customers looking to encrypt datacenter traffic without the overhead and expenditure of extra hardware can rely on SPARC T4 systems to deliver the performance necessary without the need to purchase other hardware or add-on cards.

    3. Higher complexity - the addition of cryptographic cards, or leveraging load balancers to perform encryption tasks, results in added complexity from a management standpoint. With SPARC T4, encryption keys and the framework built into Solaris 10 and 11 mean that administrators generally don't need to spend extra cycles determining how to perform cryptographic functions. In fact, many of the instructions are built in and require no user intervention to be utilized. For example, for OpenSSL on Solaris 11, SPARC T4 crypto is available directly with a new built-in OpenSSL 1.0 engine, called the "t4 engine." For a deeper technical dive into the new instructions included in SPARC T4, consult Dan Anderson's blog.

    Conclusion

    In summary, SPARC T4 systems offer customers much more value for applications than just increased performance. The integration of key virtualization technologies, embedded encryption, and a true enterprise operating system, Oracle Solaris, provides direct business benefits that supersede the commodity approach to data center computing. SPARC T4 removes the roadblocks to secure computing by offering integrated crypto accelerators that can save IT organizations operating costs while delivering higher levels of performance and meeting compliance objectives.

    For more on the SPARC T4 family of products, go here.

    Read the article

  • User Productivity Kit - Powerful Packages (Part 1)

    - by [email protected]
    User Productivity Kit provides the ability to create a variety of content types, including robust topics on system processes and web pages with formatted text and graphics. There are times when you want to enhance content with media types not natively created by User Productivity Kit - media types such as video, custom animations, forms, and more. One method of doing this is to maintain these media files on a web server - separate from the User Productivity Kit player content - and link to the files using absolute URLs such as http://myserver/overview.html. While this will get you going, you won't benefit from the content management capabilities of the UPK Developer. Features such as check-in / check-out, history, document properties, folder permissions and more are not available to this external content. Further, if you ever need to move that content to a server with a different name or domain, you'd need to update all your links.

    UPK version 3.1 introduced a new document type: the package. A package is a group of folders and files that you manage in the Developer library as a single document. These package documents work in the same manner as any other document in the library, and you can use all of the collaborative content development features you see with other document types. Packages can be used for anything from single Word documents, PDF files, and graphics to more intricate sets of inter-related files commonly seen with HTML files and their graphics, style sheets, and JavaScript files. The structure of the files and folders within a package will always be preserved, so any relative links between files in the package will work. For example, an HTML file containing an image tag with a relative link to a graphic elsewhere in the same package will continue to function properly, both when viewed in the Developer and when published to outputs such as the UPK Player.

    Once you start to use packages, you'll soon discover that there is a lot of existing content that can be re-purposed by placing it into UPK packages. Packages are easily created by selecting File...New...Package. Files can be added in a number of ways, including the "Add Files" button, copy & paste from Windows Explorer, and drag & drop. To use one of the files in the package, just create a link to the file in the package you want to target. This is supported throughout the Developer in places such as section & topic concepts, frame links, and hyperlinks in web pages.

    A little more challenging is determining how to structure packages in your library. As I mentioned earlier, a package can contain anything from a single file to dozens of files and folders. So what should you do? You could create a package for each file. You could create one package for all your files. But which one is right? Well, there's not a right and wrong answer to this question. There are advantages and disadvantages to each, and the right decision will be influenced by the package files themselves, the structure of the content in the library, the size and working style of the development team, how content is shared between different outlines, and more.

    The first consideration can be assessed the quickest. If the content to be placed in the package is composed of multiple files and those files reference each other, they should be in the same package. There are loads of examples of this type of content: HTML files with graphics and style sheets, HTML files with embedded Flash movies, and Word documents saved as HTML are all examples where the content is composed of multiple files that reference each other in some way. Content like this should always be placed in a single package so that these relative links between the files are preserved and play properly in the UPK Player.

    In upcoming posts, I'll explain additional considerations.

    Read the article

  • Hidden Features of C#?

    - by Serhat Özgel
    This came to my mind after I learned the following from this question: where T : struct. We, C# developers, all know the basics of C#: declarations, conditionals, loops, operators, etc. Some of us have even mastered stuff like Generics, anonymous types, lambdas, and LINQ. But what are the most hidden features or tricks of C# that even C# fans, addicts, and experts barely know? Here are the revealed features so far:

    Keywords: yield (Michael Stum), var (Michael Stum), using() statement (kokos), readonly (kokos), as (Mike Stone), as / is (Ed Swangren), as / is (improved) (Rocketpants), default (deathofrats), global:: (pzycoman), using() blocks (AlexCuse), volatile (Jakub Šturc), extern alias (Jakub Šturc)

    Attributes: DefaultValueAttribute (Michael Stum), ObsoleteAttribute (DannySmurf), DebuggerDisplayAttribute (Stu), DebuggerBrowsable and DebuggerStepThrough (bdukes), ThreadStaticAttribute (marxidad), FlagsAttribute (Martin Clarke), ConditionalAttribute (AndrewBurns)

    Syntax: ?? operator (kokos), number flaggings (Nick Berardi), where T : new (Lars Mæhlum), implicit generics (Keith), one-parameter lambdas (Keith), auto properties (Keith), namespace aliases (Keith), verbatim string literals with @ (Patrick), enum values (lfoust), @variablenames (marxidad), event operators (marxidad), format string brackets (Portman), property accessor accessibility modifiers (xanadont), ternary operator (?:) (JasonS), checked and unchecked operators (Binoj Antony), implicit and explicit operators (Flory)

    Language Features: Nullable types (Brad Barker), Currying (Brian Leahy), anonymous types (Keith), __makeref __reftype __refvalue (Judah Himango), object initializers (lomaxx), format strings (David in Dakota), Extension Methods (marxidad), partial methods (Jon Erickson), preprocessor directives (John Asbeck), DEBUG pre-processor directive (Robert Durgin), operator overloading (SefBkn), type inference (chakrit), boolean operators taken to the next level (Rob Gough), pass value-type variable as interface without boxing (Roman Boiko), programmatically determine declared variable type (Roman Boiko), Static Constructors (Chris), easier-on-the-eyes / condensed ORM-mapping using LINQ (roosteronacid)

    Visual Studio Features: select block of text in editor (Himadri), snippets (DannySmurf)

    Framework: TransactionScope (KiwiBastard), DependentTransaction (KiwiBastard), Nullable<T> (IainMH), Mutex (Diago), System.IO.Path (ageektrapped), WeakReference (Juan Manuel)

    Methods and Properties: String.IsNullOrEmpty() method (KiwiBastard), List.ForEach() method (KiwiBastard), BeginInvoke(), EndInvoke() methods (Will Dean), Nullable<T>.HasValue and Nullable<T>.Value properties (Rismo), GetValueOrDefault method (John Sheehan)

    Tips & Tricks: nice method for event handlers (Andreas H.R. Nilsson), uppercase comparisons (John), access anonymous types without reflection (dp), a quick way to lazily instantiate collection properties (Will), JavaScript-like anonymous inline functions (roosteronacid)

    Other: netmodules (kokos), LINQBridge (Duncan Smart), Parallel Extensions (Joel Coehoorn)
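
    As a quick taste of the list above, a small self-contained snippet exercising three of the entries (yield, the ?? operator, and verbatim string literals with @):

        using System;
        using System.Collections.Generic;

        class HiddenFeaturesDemo
        {
            // 'yield' builds a lazy iterator without writing an enumerator class.
            static IEnumerable<int> Squares(int count)
            {
                for (int i = 1; i <= count; i++)
                    yield return i * i;
            }

            static void Main()
            {
                string configured = null;
                // '??' evaluates to the right operand when the left one is null;
                // the @-string keeps the backslashes unescaped.
                string path = configured ?? @"C:\temp\default.log";

                foreach (int sq in Squares(3))
                    Console.WriteLine(sq);   // prints 1, 4, 9

                Console.WriteLine(path);     // prints C:\temp\default.log
            }
        }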

    Read the article

  • What's the recommended implementation for hashing OLE Variants?

    - by Barry Kelly
    OLE Variants, as used by older versions of Visual Basic and pervasively in COM Automation, can store lots of different types: basic types like integers and floats, more complicated types like strings and arrays, and all the way up to IDispatch implementations and pointers in the form of ByRef variants. Variants are also weakly typed: they convert the value to another type without warning, depending on which operator you apply and what the current types of the values passed to the operator are. For example, comparing two variants, one containing the integer 1 and another containing the string "1", for equality will return True.

    So assuming that I'm working with variants at the underlying data level (e.g. VARIANT in C++ or TVarData in Delphi - i.e. the big union of different possible values), how should I hash variants consistently so that they obey the right rules?

    Rules:
    - Variants that hash unequally should compare as unequal, both in sorting and direct equality.
    - Variants that compare as equal for both sorting and direct equality should hash as equal.

    It's OK if I have to use different sorting and direct comparison rules in order to make the hashing fit. The way I'm currently working is: I'm normalizing the variants to strings (if they fit) and treating them as strings; otherwise I'm working with the variant data as if it were an opaque blob, and hashing and comparing its raw bytes. That has some limitations, of course: numbers 1..10 sort as [1, 10, 2, ... 9], etc. This is mildly annoying, but it is consistent and it is very little work. However, I do wonder if there is an accepted practice for this problem.
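
    For what it's worth, here is a rough C# sketch of the "normalize, then hash" idea from the last paragraph - an illustration of one possible approach rather than an accepted practice. It canonicalizes numeric-looking values to decimal so that the integer 1 and the string "1" hash alike, and falls back to the opaque treatment otherwise; the variant is assumed to have already been marshaled to a CLR object.

        using System;
        using System.Globalization;

        static class VariantHasher
        {
            // Hash so that values that compare equal under weak typing
            // (e.g. integer 1 vs string "1") also hash equal.
            public static int HashVariant(object v)
            {
                if (v == null) return 0;

                var c = v as IConvertible;
                if (c != null && c.GetTypeCode() != TypeCode.String)
                {
                    try
                    {
                        // Numbers and bools get a canonical decimal form.
                        return c.ToDecimal(CultureInfo.InvariantCulture).GetHashCode();
                    }
                    catch (InvalidCastException) { } // e.g. DateTime: fall through
                    catch (OverflowException) { }    // out-of-range double: fall through
                }

                string s = Convert.ToString(v, CultureInfo.InvariantCulture);
                decimal d;
                if (decimal.TryParse(s, NumberStyles.Any, CultureInfo.InvariantCulture, out d))
                    return d.GetHashCode();          // "1" hashes like the number 1

                return s.GetHashCode();              // opaque fallback
            }
        }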

    Read the article

  • C# Lack of Static Inheritance - What Should I Do?

    - by yellowblood
    Alright, so as you probably know, static inheritance is impossible in C#. I understand that; however, I'm stuck with the development of my program. I will try to make it as simple as possible.

    Let's say our code needs to manage objects that represent aircraft in some airport. The requirements are as follows:

    - There are members and methods that are shared by all aircraft.
    - There are many types of aircraft; each type may have its own extra methods and members.
    - There can be many instances of each aircraft type.
    - Every aircraft type must have a friendly name and more details about the type. For example, a class named F16 would have a static member FriendlyName with the value of "Lockheed Martin F-16 Fighting Falcon".
    - Other programmers should be able to add more aircraft, and they must be forced to create the same static details about the types of the aircraft.
    - In some GUI, there should be a way to let the user see the list of available types (with details such as FriendlyName) and add or remove instances of the aircraft, saved, let's say, to some XML file.

    So, basically, if I could enforce inherited classes to implement static members and methods, I would enforce the aircraft types to have static members such as FriendlyName. Sadly I cannot do that. So what would be the best design for this scenario?
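
    One common workaround - sketched here purely as an illustration, with hypothetical names throughout - is to hang the per-type metadata on a custom attribute and enforce its presence at startup via reflection, since the compiler cannot:

        using System;
        using System.Linq;
        using System.Reflection;

        [AttributeUsage(AttributeTargets.Class, Inherited = false)]
        public class AircraftInfoAttribute : Attribute
        {
            public string FriendlyName { get; private set; }
            public AircraftInfoAttribute(string friendlyName) { FriendlyName = friendlyName; }
        }

        public abstract class Aircraft { /* shared members and methods */ }

        [AircraftInfo("Lockheed Martin F-16 Fighting Falcon")]
        public class F16 : Aircraft { /* type-specific members */ }

        public static class AircraftCatalog
        {
            // Enforce at load time what the compiler can't: every concrete
            // Aircraft subtype must carry the metadata attribute.
            public static void Validate(Assembly assembly)
            {
                var missing = assembly.GetTypes()
                    .Where(t => t.IsSubclassOf(typeof(Aircraft)) && !t.IsAbstract)
                    .Where(t => !t.GetCustomAttributes(typeof(AircraftInfoAttribute), false).Any())
                    .Select(t => t.Name)
                    .ToList();

                if (missing.Count > 0)
                    throw new InvalidOperationException(
                        "Aircraft types missing [AircraftInfo]: " + string.Join(", ", missing.ToArray()));
            }
        }

    The GUI can then enumerate the available types by reading AircraftInfoAttribute off each Aircraft subtype, which also covers the "list of available types with FriendlyName" requirement.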

    Read the article

  • Invalid Cast Exception ASP.NET C#

    - by Shadow Scorpion
    I have a problem in this code:

        public static T[] GetExtras<T>(Type[] Types)
        {
            List<T> Res = new List<T>();
            foreach (object Current in GetExtras(typeof(T), Types))
            {
                Res.Add((T)Current); // this is the error
            }
            return Res.ToArray();
        }

        public static object[] GetExtras(Type ExtraType, Type[] Types)
        {
            lock (ExtraType)
            {
                if (!ExtraType.IsInterface)
                    return new object[] { };
                List<object> Res = new List<object>();
                bool found = false;
                found = (ExtraType == typeof(IExtra));
                foreach (Type CurInterFace in ExtraType.GetInterfaces())
                {
                    if (found = (CurInterFace == typeof(IExtra)))
                        break;
                }
                if (!found)
                    return new object[] { };
                foreach (Type CurType in Types)
                {
                    found = false;
                    if (!CurType.IsClass)
                        continue;
                    foreach (Type CurInterface in CurType.GetInterfaces())
                    {
                        try
                        {
                            if (found = (CurInterface.FullName == ExtraType.FullName))
                                break;
                        }
                        catch { }
                    }
                    try
                    {
                        if (found)
                            Res.Add(Activator.CreateInstance(CurType));
                    }
                    catch { }
                }
                return Res.ToArray();
            }
        }

    When I use this code in a Windows application, it works! But I can't use it on an ASP page. Why?
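
    A sketch of one possible explanation and fix - my assumption, not a confirmed diagnosis: under ASP.NET's dynamic compilation, a type named IExtra can be loaded from two different assemblies (or load contexts). Two such types share a FullName but are not the same Type, so matching interfaces by FullName can select a class whose IExtra is not the caller's IExtra, and the cast to T then throws InvalidCastException. Comparing type identities instead of names avoids that:

        using System;
        using System.Collections.Generic;

        public static class ExtraLoader
        {
            // Hypothetical variant of the scan above: IsAssignableFrom compares
            // actual type identity, not names from possibly different assemblies.
            public static T[] GetExtras<T>(Type[] types)
            {
                var res = new List<T>();
                foreach (Type t in types)
                {
                    if (t.IsClass && !t.IsAbstract && typeof(T).IsAssignableFrom(t))
                        res.Add((T)Activator.CreateInstance(t));
                }
                return res.ToArray();
            }
        }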

    Read the article

  • Sort List C# in arbitrary order

    - by Jasper
    I have a C# List, e.g. List<Food> x = new List<Food>();. This list is populated with this class:

        public class Food
        {
            public string id { get; set; }
            public string idUser { get; set; }
            public string idType { get; set; } // idType could be Fruit, Meat, Vegetable, Candy
            public string location { get; set; }
        }

    Now I have this unsorted List<Food> list which has, e.g., 15 elements: 8 Vegetable types, 3 Fruit types, 1 Meat type, and 1 Candy type. I would sort this so as to have a list ordered in this way:

        1:  Food.idType Fruit
        2:  Food.idType Vegetable
        3:  Food.idType Meat
        4:  Food.idType Candy
        5:  Food.idType Fruit
        6:  Food.idType Vegetable
        7:  Food.idType Fruit      // because there is no more Meat I insert the next one,
                                   // which is Candy, but that type is also empty, so I
                                   // start from the beginning: Fruit
        8:  Food.idType Vegetable
        9:  Food.idType Vegetable  // for the same reason as 7
        10: Food.idType Vegetable
        ...
        15: Food.idType Vegetable

    I can't find a rule to do this. Is there a LINQ or List.Sort instruction which would help me order the list in this way?

    Update: I changed the return value of idType, and it now returns an int instead of a string, so 1 = Vegetable, 2 = Fruit, 3 = Candy, 4 = Meat.
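
    There is no single List.Sort overload for this, but the round-robin ordering described above can be produced with LINQ. A sketch, assuming the Food class from the post and a hypothetical priority map (with the updated int idType, the map would simply key on 2, 1, 4, 3 instead of the strings):

        using System.Collections.Generic;
        using System.Linq;

        static class FoodOrdering
        {
            static readonly Dictionary<string, int> Priority = new Dictionary<string, int>
            {
                { "Fruit", 0 }, { "Vegetable", 1 }, { "Meat", 2 }, { "Candy", 3 }
            };

            public static List<Food> InterleaveByType(List<Food> foods)
            {
                return foods
                    .GroupBy(f => f.idType)
                    .SelectMany(g => g.Select((f, i) => new { f, round = i, pri = Priority[g.Key] }))
                    .OrderBy(e => e.round)  // one item of each remaining type per pass
                    .ThenBy(e => e.pri)     // within a pass: Fruit, Vegetable, Meat, Candy
                    .Select(e => e.f)
                    .ToList();
            }
        }

    With 3 Fruit, 8 Vegetable, 1 Meat and 1 Candy this yields Fruit, Vegetable, Meat, Candy, Fruit, Vegetable, Fruit, Vegetable, then the remaining Vegetables - matching the ordering listed above.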

    Read the article

  • Sharepoint: Integrity of lookup fields after a list import

    - by driAn
    Hi there. I've got a question about the behavior of lookup fields when importing data: I wonder how lookup fields behave when the list they point to is being replaced/imported. To explain the issue, I will provide a quick example. Assume we have these two SharePoint lists:

        Product Types
        -------------
        + Type Name
        + Code Nr
        + etc

        Products
        --------
        + Product Name
        + Product Type (Lookup field to list "Product Types")
        + etc

    In my scenario, the Products list contains production data on the production SharePoint platform. It is filled with data by the business users. However, the Product Types list contains rather static data and is maintained by the developer. Now, after a development cycle, the developer wants to deploy his new webparts and his new data (the Product Types list). The developer performs the following procedure:

    1. On the dev machine: export the "Product Types" list using stsadm
    2. On the production machine: delete all items in the "Product Types" list
    3. On the production machine: import the "Product Types" list using stsadm

    This means we basically replace the "Product Types" list on the production server while keeping the Products list as it is. Now the questions:

    - Is this safe? Will the lookup references break under certain circumstances?
    - Any downside of this import/export procedure?
    - What happens if someone accesses a "product" during the import? Will the (now invalid) reference clear its own content (become a null value)?
    - What happens if the schema of the "Product Types" list changes (new column)? Will this cause any trouble?

    Thanks for all feedback and suggestions!

    Read the article

  • How to transfer objects through the header in WCF

    - by Michael
    I'm trying to transfer some user information in the header of the message through message inspectors. I have created a behavior which adds the inspector to the service (both client and server). But when I try to communicate with the service I get the following error:

        XmlException: Name cannot begin with the '<' character, hexadecimal value 0x3C.

    I have also gotten an exception telling me that a DataContract was unexpected:

        Type 'System.DelegateSerializationHolder+DelegateEntry' with data contract name 'DelegateSerializationHolder.DelegateEntry:http://schemas.datacontract.org/2004/07/System' is not expected. Consider using a DataContractResolver or add any types not known statically to the list of known types - for example, by using the KnownTypeAttribute attribute or by adding them to the list of known types passed to DataContractSerializer.

    The thing is that my object contains other objects which are marked as DataContract, and I'm not interested in adding the KnownType attribute for those types. Another problem might be that my object to serialize is very restricted in form (internal class, internal properties, etc.). Can anyone guide me in the right direction? What am I doing wrong? Some code:

        public virtual object BeforeSendRequest(ref Message request, IClientChannel channel)
        {
            var header = MessageHeader.CreateHeader("<name>", "<namespace>", object);
            request.Headers.Add(header);
            return Guid.NewGuid();
        }
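
    For contrast, a minimal sketch of a header that serializes cleanly - my illustration, with hypothetical names: "UserInfo" and the namespace URI are made up. The points are that the header name is a valid XML name (no '<' characters) and that the payload is a [DataContract] type rather than an arbitrary object:

        using System;
        using System.Runtime.Serialization;
        using System.ServiceModel;
        using System.ServiceModel.Channels;

        [DataContract]
        public class UserInfo // hypothetical header payload
        {
            [DataMember] public string Name { get; set; }
            [DataMember] public Guid SessionId { get; set; }
        }

        public class UserInfoInspector // implements IClientMessageInspector
        {
            public object BeforeSendRequest(ref Message request, IClientChannel channel)
            {
                var info = new UserInfo { Name = "jdoe", SessionId = Guid.NewGuid() };
                var header = MessageHeader.CreateHeader(
                    "UserInfo", "http://example.com/headers", info);
                request.Headers.Add(header);
                return null;
            }
        }

        // Reading it back on the service side:
        //   var info = OperationContext.Current.IncomingMessageHeaders
        //       .GetHeader<UserInfo>("UserInfo", "http://example.com/headers");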

    Read the article

  • Adding a dynamic image to a UITableView Cell

    - by Graeme
    Hi, I have a table in which I want a dynamic image to load in at the left-hand side. The table needs to use an IF statement to select the appropriate image from the "Resources" folder, based upon [dog types]. The [dog types] is extracted from an RSS feed, and so the image in each table cell needs to match that cell's [dog types] tag.

    Update: see the code below for what I'm looking to do - only there it's for earthquakes, while for me it's pictures of dogs. I suspect I need to use - (UIImage *)imageForTypes:(NSString *)types { to do such a thing. Thanks.

    Updated code: this is from Apple's Earthquake sample, but it does exactly what I need to do. It matches an image (2.0, 3.0, 4.0, 5.0, etc.) to every earthquake based on its magnitude.

        - (UIImage *)imageForMagnitude:(CGFloat)magnitude {
            if (magnitude >= 5.0) {
                return [UIImage imageNamed:@"5.0.png"];
            }
            if (magnitude >= 4.0) {
                return [UIImage imageNamed:@"4.0.png"];
            }
            if (magnitude >= 3.0) {
                return [UIImage imageNamed:@"3.0.png"];
            }
            if (magnitude >= 2.0) {
                return [UIImage imageNamed:@"2.0.png"];
            }
            return nil;
        }

    and under - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath:

        magnitudeImage.image = [self imageForMagnitude:earthquake.magnitude];

    Read the article

  • Is there a programming toolkit for converting "any file type" to a TIFF image?

    - by Ryan
    Hello. I've written several variations of a program. The purpose of the program is to convert "any file type" to a TIFF image representation of that file, as if it were being printed using a printer. I'm currently using a third-party printer driver that I send files to, and it outputs a TIFF image. This is nice, but it requires me to use Office Interop files and interact with each individual processing application in order to print the files.

    I had previously tried a toolkit, called Aspose .NET, which did not rely on Office Interop and did not require any printer driver. It did the conversion all on its own and would create a TIFF image. The problem with Aspose .NET was that it did not support a wide variety of input file types. Most notably, it can't do Visio files.

    My project calls for the ability to create a TIFF image for virtually "any file type" (excluding exes, music files, and such). I know that finding something that handles literally any file type is probably not a very feasible task, so I figure if it can at least handle all the Office file types, Adobe types, and other major standard file types, then I can write a custom extension-parsing module that uses those processing applications to do the printing of any file type that can be viewed using those applications.

    So, does anyone know of a toolkit that can do this? Preferably one that does not rely on Office or a printer driver. It does not have to be free or open source. Or, if you know of an amazing printer driver that does this, I'm open to that too. Thanks in advance, Ryan

    Read the article

  • WPF Sizing of Panels

    - by Mystagogue
    I'm looking for an article or overview of WPF Panel types that explains the sizing characteristics of each. For example, here are the panel types: http://msdn.microsoft.com/en-us/library/ms754152.aspx#Panels_derived_elements

    I've learned (by experiment) that UniformGrid can be given a fixed height, or it can be "auto", where it expands to fit available space. That is great, but what I wanted was for UniformGrid to shrink to fit its internal content (particularly content that is provided dynamically, at run-time). I don't think it has that ability. So I'd like to know either what other Panel I could use for that purpose, or what Panel I should nest the UniformGrid inside of.

    But I don't just want an answer to that specific question. I want the sizing dynamics and capabilities of all the Panel types, in summary form, so I can make all these choices as needed. Online I find articles that cover only half the Panel types and don't give as much information about sizing as I'm describing. Does anyone know the link (or book) that has the info I'm seeking?

    P.S. Since I want the UniformGrid to shrink to the dynamic content I'm providing, I could just keep track of the total height of controls placed within and then set the height of the UniformGrid. But it would be nice if WPF took care of this for me.

    Read the article

  • Polymorphic Numerics on .Net and In C#

    - by Bent Rasmussen
    It's a real shame that in .NET there is no polymorphism for numbers, i.e. no INumeric interface that unifies the different kinds of numerical types such as bool, byte, uint, int, etc. In the extreme, one would like a complete package of abstract algebra types. Joe Duffy has an article about the issue: http://www.bluebytesoftware.com/blog/CommentView,guid,14b37ade-3110-4596-9d6e-bacdcd75baa8.aspx

    How would you express this in C#, in order to retrofit it, without having influence over .NET or C#? I have one idea that involves first defining one or more abstract types (interfaces such as INumeric - or more abstract than that) and then defining structs that implement these and wrap types such as int, while providing operations that return the new type (e.g. Integer32 : INumeric, where addition would be defined as):

        public Integer32 Add(Integer32 other)
        {
            return Return(Value + other.Value);
        }

    I am somewhat afraid of the execution speed of this code, but at least it is abstract. No operator overloading goodness... Any other ideas? .NET doesn't look like a viable long-term platform if it cannot have this kind of abstraction, I think - and be efficient about it. Abstraction is reuse.
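
    To make the retrofit idea concrete, a minimal sketch of the self-referencing generic form of this pattern (INumeric<T>, Integer32 and Sum are my hypothetical names, not an existing library):

        using System.Collections.Generic;

        public interface INumeric<T> where T : struct
        {
            T Add(T other);
        }

        public struct Integer32 : INumeric<Integer32>
        {
            public readonly int Value;
            public Integer32(int value) { Value = value; }
            public Integer32 Add(Integer32 other) { return new Integer32(Value + other.Value); }
        }

        public static class Algebra
        {
            // One generic algorithm written against the abstraction; the struct
            // constraint lets the JIT specialize per value type, avoiding boxing.
            public static T Sum<T>(IEnumerable<T> xs) where T : struct, INumeric<T>
            {
                T acc = default(T); // assumes default(T) is the additive identity
                foreach (T x in xs)
                    acc = acc.Add(x);
                return acc;
            }
        }

    Interface dispatch on a constrained generic struct is typically specialized by the JIT, which addresses some of the execution-speed worry, though operator syntax is still lost.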

    Read the article

  • Is it valid to use unsafe struct * as an opaque type instead of IntPtr in .NET Platform Invoke?

    - by David Jeske
    .NET Platform Invoke advocates declaring pointer types as IntPtr. For example:

        [DllImport("user32.dll")]
        static extern IntPtr SendMessage(IntPtr hWnd, UInt32 Msg, Int32 wParam, Int32 lParam);

    However, I find when interfacing with interesting native interfaces that have many pointer types, flattening everything into IntPtr makes the code very hard to read and removes the typical typechecking that a compiler can do. I've been using a pattern where I declare an unsafe struct to be an opaque pointer type. I can store this pointer type in a managed object, and the compiler can typecheck it for me. For example:

        class Foo
        {
            unsafe struct FOO { }; // opaque type
            unsafe FOO* my_foo;

            static class If
            {
                [DllImport("mydll")]
                public extern static unsafe FOO* get_foo();
                [DllImport("mydll")]
                public extern static unsafe void do_something_foo(FOO* foo);
            }

            public unsafe Foo()
            {
                this.my_foo = If.get_foo();
            }

            public unsafe void do_something_foo()
            {
                If.do_something_foo(this.my_foo);
            }
        }

    While this example may not seem different than using IntPtr, when there are several pointer types moving between managed and native code, using these opaque pointer types for typechecking is a godsend. I have not run into any trouble using this technique in practice. However, I also have not seen examples of anyone else using this technique, and I wonder why.

    Is there any reason that the above code is invalid in the eyes of the .NET runtime? My main question is about how the .NET GC system treats "unsafe FOO *my_foo". Is this pointer something the GC system is going to try to trace, or is it simply going to ignore it? My hope is that because the underlying type is a struct, and it's declared unsafe, the GC will ignore it. However, I don't know for sure. Thoughts?

    Read the article

  • Python import error: Symbol not found, but the symbol is not present in the file

    - by Autopulated
    I get this error when I try to import ssrc.spread:

        ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/ssrc/_spread.so, 2): Symbol not found: __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE

    The file in question (_spread.so) includes the symbol:

        $ nm _spread.so | grep _ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE
        U __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE
        U __ZN17ssrcspread_v1_0_67Mailbox11ZeroTimeoutE

    (twice because the file is a fat ppc/x86 binary)

    EDIT: okay, as James points out, the U means that the symbol is undefined but required by the object file. With some more digging I've noticed (where I should have looked first...) these linker errors during compilation:

        CC=g++ CXX=g++ g++-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -O3 -I../.. -I../.. -I/usr/local/include -I/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -O2 -I/usr/local/include -std=c++98 -pipe -fno-gnu-keywords -fvisibility-inlines-hidden -o SsrcSpread.o -c SsrcSpread.cc
        CC=g++ CXX=g++ /bin/sh ../../libtool --tag=CXX --mode=link g++-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -bundle -undefined dynamic_lookup -F/Library/Frameworks -framework Python \
            -pthread -D_REENTRANT -pedantic -Wall -Wno-long-long -Winline -Woverloaded-virtual -Wold-style-cast -Wsign-promo -L../../ssrc -lssrcspread -L/usr/local/lib -ltspread-core -o _spread.so SsrcSpread.o
        mkdir .libs
        g++-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -bundle -undefined dynamic_lookup -F/Library/Frameworks -framework Python -pthread -D_REENTRANT -pedantic -Wall -Wno-long-long -Winline -Woverloaded-virtual -Wold-style-cast -Wsign-promo -o _spread.so SsrcSpread.o -Wl,-bind_at_load -L/Dev/libssrcspread-1.0.6/ssrc /Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a -L/usr/local/lib -ltspread-core
        ld: warning: in ~/Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a, file was built for unsupported file format which is not the architecture being linked (ppc)
        ld: warning: in /Developer/SDKs/MacOSX10.4u.sdk/usr/local/lib/libtspread-core.dylib, file was built for unsupported file format which is not the architecture being linked (ppc)
        ld: warning: in /Dev/libssrcspread-1.0.6/ssrc/.libs/libssrcspread.a, file was built for unsupported file format which is not the architecture being linked (i386)
        ld: warning: in /Developer/SDKs/MacOSX10.4u.sdk/usr/local/lib/libtspread-core.dylib, file was built for unsupported file format which is not the architecture being linked (i386)

    I'm also not entirely sure that the 10.4 SDK is the right one for compiling Python modules (but switching to 10.6 didn't seem to help).

    Read the article

  • boost::lambda bind expressions: can't get bind to string's empty() to work

    - by navigator
    Hi, I am trying to get the below code snippet to compile, but it fails with:

        error C2665: 'boost::lambda::function_adaptor::apply' : none of the 8 overloads could convert all the argument types

    Specifying the return type when calling bind does not help. Any idea what I am doing wrong? Thanks.

        #include <algorithm>
        #include <iostream>
        #include <map>
        #include <string>
        #include <boost/lambda/lambda.hpp>
        #include <boost/lambda/bind.hpp>

        int main()
        {
            namespace bl = boost::lambda;
            typedef std::map<int, std::string> types;

            types keys_and_values;
            keys_and_values[ 0 ] = "zero";
            keys_and_values[ 1 ] = "one";
            keys_and_values[ 2 ] = "Two";

            std::for_each( keys_and_values.begin(), keys_and_values.end(),
                std::cout << bl::constant("Value empty?: ") << std::boolalpha
                          << bl::bind( &std::string::empty,
                                 bl::bind( &types::value_type::second, bl::_1 ) )
                          << "\n" );
            return 0;
        }

    Read the article

  • database design suggestion needed

    - by JMSA
    I need to design a table for daily sales of pharmaceutical products.

    There are hundreds of types of products available {Name, Code}. Thousands of sales-persons are employed to sell those products {Name, Code}. They collect products from different depots {Name, Code}. They work in different Areas - Zones - Markets - Outlets, etc. {all have names and codes}. Each product has various types of prices {Production Price, Trade Price, Business Price, Discount Price, etc.}, and sales-persons are free to choose from those combinations to estimate the sales price.

    The problem is that daily sales require a huge amount of data entry. Within a couple of years there may be gigabytes of data (if not terabytes). If I need to show daily, weekly, monthly, quarterly and yearly sales reports, there will be various types of SQL queries I shall need.

    This is my initial design:

        Product {ID, Code, Name, IsActive}
        ProductXYZPriceHistory {ID, ProductID, Date, EffectDate, Price, IsCurrent}
        SalesPerson {ID, Code, Name, JoinDate, and so on..., IsActive}
        SalesPersonSalesAraeaHistory {ID, SalesPersonID, SalesAreaID, IsCurrent}
        Depot {ID, Code, Name, IsActive}
        Outlet {ID, Code, Name, AreaID, IsActive}
        AreaHierarchy {ID, Code, Name, PrentID, AreaLevel, IsActive}
        DailySales {ID, ProductID, SalesPersonID, OutletID, Date, PriceID, SalesPrice, Discount, etc...}

    Now, apart from indexing, how can I normalize my DailySales table to have a fine-grained design that I shall not need to change for years to come? Please show me a sample design of only the DailySales data-entry table (from which all types of reports would be queried), on the basis of the above information. I don't need detailed design advice - I just need advice regarding the DailySales table. Is there any way to break this particular table to achieve granularity?

    Read the article
