Search Results

Search found 23797 results on 952 pages for 'css framework'.


  • How do I upgrade my Crystal Report libraries in a .NET 3.5 project to CR XI R2?

    - by Stuart B
    Our project currently uses Crystal Reports for Visual Studio 2008. We need to upgrade to XI R2, but I'm having problems doing so. Here are the steps I followed:
    1. Install Crystal Reports XI R2.
    2. Collect updated assemblies from the GAC. I did this because I couldn't find version XI libraries in the "Add References..." dialog. I verified that these assemblies were of version 11.5.*. The libraries I gathered were:
       - CrystalDecisions.CrystalReports.Engine
       - CrystalDecisions.Enterprise.Framework
       - CrystalDecisions.Enterprise.InfoStore
       - CrystalDecisions.ReportSource
       - CrystalDecisions.Shared
       - CrystalDecisions.Windows.Forms
    3. Replace all references in my projects to the version 10.5 Crystal libraries with references to the newer assemblies.
    Everything builds fine, but when I try to instantiate a ReportDocument, I get this error: "The type initializer for 'CrystalDecisions.CrystalReports.Engine.ReportDocument' threw an exception." Is there anything I'm missing? Will this just not work?

    Read the article

  • How can I turn on DynamicCompression feature of IIS programmatically?

    - by LockeVN
    I'm writing an installer program for my web application. The application uses CSS and JS heavily, so I want to enable both static and dynamic HTTP compression for IIS7/7.5. This needs two steps:
    1. Modify web.config and add the <httpCompression> section; that part is fine.
    2. Turn on the Dynamic Content Compression Windows feature, without which the <httpCompression> settings have no effect. Static compression is enabled by default in IIS7 and IIS7.5, but dynamic compression is not enabled by default (although it's available).
    I can do it manually via Start / Control Panel / Programs and Features / Turn Windows features on or off / IIS / WWW Services / Performance Features / Dynamic Content Compression, but how can I turn that Windows feature on programmatically? I can use PowerShell or C# in my installer. Any idea how I might be able to do this? Thanks.
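
    One approach that should work from an installer is to shell out to DISM and enable the feature by name. A minimal C# sketch; the feature name IIS-HttpCompressionDynamic and the exit-code convention are assumptions worth verifying with "dism /online /get-features" on the target OS, and the process must be elevated:

        // Sketch: enable the Dynamic Content Compression feature via DISM.
        // Assumes Windows 7 / Server 2008 R2 or later and an elevated process.
        using System;
        using System.Diagnostics;

        class CompressionFeatureInstaller
        {
            static void Main()
            {
                var psi = new ProcessStartInfo
                {
                    FileName = "dism.exe",
                    Arguments = "/online /enable-feature /featurename:IIS-HttpCompressionDynamic",
                    UseShellExecute = false,
                    RedirectStandardOutput = true,
                    CreateNoWindow = true
                };

                using (var process = Process.Start(psi))
                {
                    Console.WriteLine(process.StandardOutput.ReadToEnd());
                    process.WaitForExit();
                    // By convention DISM returns 0 on success and 3010 if a
                    // reboot is required; treat anything else as failure.
                    Console.WriteLine("DISM exit code: " + process.ExitCode);
                }
            }
        }

    On Server SKUs the PowerShell route (something like Add-WindowsFeature for the dynamic compression role service) may be cleaner, but the DISM call has the advantage of working on client Windows too.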

    Read the article

  • Reading Local Group Policy / Active Directory Settings

    - by Shinobi
    I'm writing a C# program that will enforce password complexity in accordance with the Windows Group Policy setting "Password must meet complexity requirements". Specifically, if that policy is set to Enabled either on the local machine (if it's not part of a domain) or by the Domain Security Policy (for domain members), then my software needs to enforce a complex password for its own internal security. The issue is that I can't figure out how to read that GPO setting. Google searches have indicated that I can read GPO settings with one of these two APIs: the System.DirectoryServices library in .NET Framework, and Windows Management Instrumentation (WMI), but I haven't had any success so far. Any insights would be helpful.
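
    One workaround to consider: export the local security database with secedit and parse the PasswordComplexity value; on domain members the applied domain policy should be reflected in that database. A minimal sketch, assuming an elevated process (the /areas flag and the INI key name are worth double-checking on your target systems):

        // Sketch: read the effective "Password must meet complexity
        // requirements" setting by exporting the security policy with
        // secedit and parsing the exported INI file.
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Text.RegularExpressions;

        static class PasswordPolicy
        {
            public static bool ComplexityRequired()
            {
                string cfg = Path.Combine(Path.GetTempPath(), "secpol.inf");
                var psi = new ProcessStartInfo("secedit.exe",
                    "/export /cfg \"" + cfg + "\" /areas SECURITYPOLICY")
                {
                    UseShellExecute = false,
                    CreateNoWindow = true
                };
                using (var p = Process.Start(psi)) { p.WaitForExit(); }

                // The exported file contains a line like "PasswordComplexity = 1".
                string text = File.ReadAllText(cfg);
                File.Delete(cfg);
                Match m = Regex.Match(text, @"PasswordComplexity\s*=\s*(\d)");
                return m.Success && m.Groups[1].Value == "1";
            }
        }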

    Read the article

  • Yii include PHP Excel

    - by Anton Sementsov
    I'm trying to include the PHPExcel library in Yii. I put PHPExcel.php in the root of extensions, next to the PHPExcel folder, and added this to config/main.php:

        // application components
        'components'=>array(
            'excel'=>array(
                'class'=>'application.extensions.PHPExcel',
            ),
        ),

    I also modified /protected/extensions/PHPExcel/Autoloader.php:

        public static function Register() {
            $functions = spl_autoload_functions();
            foreach ($functions as $function)
                spl_autoload_unregister($function);
            $functions = array_merge(array(array('PHPExcel_Autoloader', 'Load')), $functions);
            foreach ($functions as $function)
                $x = spl_autoload_register($function);
            return $x;
        } // function Register()

    Then I try to create a PHPExcel object with $objPHPExcel = new PHPExcel(); but get this error:

        include(PHPExcel.php): failed to open stream: No such file or directory in Z:\home\yii.local\www\framework\YiiBase.php(418)

    Read the article

  • Spring roo Vs (Wicket and Spring)

    - by Ketan Khairnar
    Spring Roo is a new framework and I find it very interesting. I have been working on web applications for the last 3-4 years, and I've always found JSPs hard to maintain across teams unless everyone is disciplined about separating markup from server-side logic. I used JackBe/Backbase on my last projects and enjoyed XML templates working as views; this was much better than JSPs, but I couldn't automate web tests through Selenium for Backbase. I will certainly be using Spring MVC (minus the view layer) and Hibernate on the backend. I found Wicket to be a good alternative. Have you used Wicket along with Spring, and what was your experience?

    Read the article

  • JSF Conditional formatting for onmouseover and such attributes.

    - by Ben
    Hi, I'm trying to format a panelGrid according to a value in the backing bean. I'm currently using this as the value of the onmouseover attribute:

        this.className=#{(actions.currentlySelectedActionButton == 0) ? 'actionButton actionButtonChosen' : 'actionButton'};

    The relevant parts of the CSS look like this:

        .actionButton {
            width: 100%;
            height: 20px;
            border: thin solid #000;
            cursor: default;
        }
        .actionButtonChosen {
            background-color: blue;
        }

    It's not working. Anyone spotting the error would help me greatly. Thanks!

    Read the article

  • Upgrading Visual Studio 2010 RC to RTM

    - by Brant Bobby
    I have the RC build of VS2010 installed on my computer. Now that the RTM build is out, I want to upgrade. Aside from the main Visual Studio package and .NET Framework 4, what else should I remove before I install the RTM build in order to minimize potential breakage/conflicts? VS2010 installs a whole bunch of ancillary packages and I'm not sure which ones have been upgraded between RC and RTM. (Extra credit: I've got another machine that is still running Beta 2. Would the procedure be the same?)

    Read the article

  • Cross domain cookie reading/setting cross browsers

    - by Rac123
    I know there are already a few threads available here on this subject, but I want others' opinions. There are two ways to set/read cross-domain cookies:
    1. Create an IFrame on A.com pointing to a page on B.com which creates the cookie, and pass the information back by creating another IFrame on the B.com side pointing to A.com, using either window.name or the hash in location.href.
    2. Have the A.com page make an XHR/JSONP call to a B.com web service/page that sets the cookie, returns the value, and sends the following header:

        AddHeader("p3p", "CP=\"IDC DSP COR ADM DEVi TAIi PSA PSD IVAi IVDi CONi HIS OUR IND CNT\"")

    As postMessage is not available across all browsers, I believe we have to go with one of the cases mentioned above. My question is: which is the better (cleaner) way to implement this cross-browser, and why? Using any other JS framework is out of scope for this discussion. If there's another, better way, please mention it here! Thank you for your intelligent input in advance! :)

    Read the article

  • ASP.NET MVC 2.0 RTM cannot work with VWD 2008 Express on a new Windows 7 Pro

    - by silent
    MVC 2.0 RTM works great on my old Vista computer with VWD 2008 Express, but I just bought a new computer with Windows 7 Pro. I installed VWD 2008 Express SP1 and MVC 2.0 RTM using Web PI 2.0, but after installation I found that VWD doesn't have any MVC options, meaning I can neither create new MVC projects nor compile existing ones. Why? What other steps do I need to take to make it work? I'm sure MVC has been installed properly, since my MVC site on the new computer works well (so the IIS side has no problem); VWD just can't 'realize' that the MVC framework is already installed. (I've uninstalled and reinstalled many times, and I also tried to install MVC separately without Web PI, but it just won't work.)

    Read the article

  • C# .Net file in use issue

    - by Dan
    I'm having an issue opening files that have recently been closed by the .NET framework. Basically, the sequence is:
    1. Read in an XML file using DataSet.ReadXml()
    2. Make some changes to the data
    3. Write out the XML file using DataSet.WriteXml()
    4. Copy the XML file to a new location using File.Copy
    5. FTP the file using a custom control
    This sequence can intermittently fail, either after the WriteXml or the File.Copy, with a file-in-use exception. I'm guessing it could be the Windows write cache not flushing right away. Can anyone confirm that this could be causing my issue? Any solutions to suggest? Thanks, Dan
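
    One common mitigation, assuming the culprit is something like an antivirus scanner or search indexer briefly holding the file after WriteXml: wrap the follow-up operation in a short retry loop for IOException. A sketch, with illustrative retry count and delays:

        // Sketch: retry a file operation that can fail transiently with a
        // "file in use" IOException shortly after the file was written.
        using System;
        using System.IO;
        using System.Threading;

        static class FileRetry
        {
            public static void WithRetries(Action operation)
            {
                const int attempts = 5;
                for (int i = 1; ; i++)
                {
                    try
                    {
                        operation();
                        return;
                    }
                    catch (IOException)
                    {
                        if (i >= attempts) throw;  // give up after the last try
                        Thread.Sleep(200 * i);     // back off before retrying
                    }
                }
            }
        }

    Usage would be something like FileRetry.WithRetries(() => File.Copy(source, destination, true)); the same wrapper fits around the FTP upload.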

    Read the article

  • How to do logout functionality in C# window application?

    - by Shailesh Jaiswal
    I am developing a smart device application in C#; it is a WinForms application. In this application I use a login form to authenticate users: only authenticated users can log into the system. I use static variables in this application so that they can be accessed at application level. After deploying the application I can see that the emulator provides a close button with a multiplication symbol; that way I can close my form as well as the application. But I want to provide a logout link in my application. Can I provide logout functionality in a C# WinForms application? If yes, how do I do that? Please keep in mind that not all functions of the .NET Framework are available in the .NET Compact Framework. Can you provide me code or a link that resolves this issue?
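
    One common pattern, sketched below: clear the application-level state and bounce the user back to the login form. LoginForm and Session are hypothetical names standing in for the question's login form and its static variables; ShowDialog and Application.Exit should both be available in the Compact Framework, but verify against your CF version:

        // Sketch: a Logout menu handler in the application's main form.
        private void menuLogout_Click(object sender, EventArgs e)
        {
            Session.CurrentUser = null;      // clear application-level state

            Hide();                          // take the main form off-screen
            using (LoginForm login = new LoginForm())
            {
                if (login.ShowDialog() == DialogResult.OK)
                {
                    Show();                  // user re-authenticated
                }
                else
                {
                    Application.Exit();      // user gave up; close the app
                }
            }
        }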

    Read the article

  • windbg and symbols

    - by CaseyJones
    When I set a breakpoint on one of the methods that appears on top of the stack (!CLRStack), I get lots of these messages for every DLL that the debuggee is referencing, including the .NET Framework ones:

        ERROR: Module load completed but symbols could not be loaded

    Further digging shows that windbg is not loading every .pdb file that I make available in the symbol path. I've double-checked my symbol path and it looks OK, but the following commands clearly show that not all PDBs are loaded correctly:

        0:000> !sym noisy
        noisy mode - symbol prompts on
        0:000> .reload
        Reloading current modules
        ................................................................
        DBGHELP: ntdll - public symbols
                 c:\symbols\ntdll.pdb\6992F4DAF4B144068D78669D6CB5D2072\ntdll.pdb
        ..
        0:000> .sympath
        Symbol search path is: SRV*c:\symbols*C:\xc
        Expanded Symbol search path is: srv*c:\symbols*c:\xc

    c:\symbols is used for the cache and c:\xc for the .NET app PDBs that WinDBG seems unable to find. Any ideas on how I can troubleshoot this further? Thanks

    Read the article

  • Change table row display property

    - by Idsa
    I have an HTML page with a table that contains a hidden row:

        <table>
            <tr id="hiddenTr" style="display:none">
            </tr>
        </table>

    I need to make it visible on the client side using jQuery. I tried this:

        $('#hiddenTr').show();

    and this:

        $('#hiddenTr').css('display', 'table-row');

    Neither implementation works for me. Furthermore, the second one is not cross-browser.

    Read the article

  • Using a DataSet instead of custom business entities in soa and n-tier architecture

    - by kathy
    I'm working on a large, high-volume transactional enterprise application designed using an n-tier architecture. It was developed on the .NET platform using C#, VB.NET, Framework 3.5, ObjectDataSources, DataSet, WCF, the ASP.NET UpdatePanel, JavaScript, JSON, and third-party tools. The application is supposed to be scalable, easily maintained, and robust, to support integrations, and to expose its services in a format that can be understood by other systems. The problem is that the application is about 70% complete, and I'm now wondering whether the following will cause us issues in the future: I'm using a DataSet and a DataTable to get and set data from and to stored procedures in the database via ObjectDataSources. Will this prevent my application from achieving the above goals? Actually, I am not anti-OO; I write lots of classes for different purposes. But I didn't use custom business entities, because I have a large database that may contain 50 tables, and I was afraid to create entities for each table; if I need to change the database schema in the future, it might have a huge effect on the application.
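
    One low-cost middle ground, sketched below, is to keep the DataSet plumbing but wrap rows in thin entity classes so that column names live in one place; a schema change then touches only the wrapper, not every caller. Customer and its columns are purely illustrative:

        // Sketch: a thin entity wrapper over a DataRow.
        using System.Data;

        public class Customer
        {
            private readonly DataRow _row;

            public Customer(DataRow row)
            {
                _row = row;
            }

            // Column names are centralized here; callers never see them.
            public int Id
            {
                get { return (int)_row["CustomerId"]; }
            }

            public string Name
            {
                get { return (string)_row["Name"]; }
                set { _row["Name"] = value; }
            }
        }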

    Read the article

  • I can't build C# class library by MonoDevelop on Mac OS X

    - by wataradio
    When building the following simple C# class library:

        using System;

        namespace MyProject
        {
            public class MyClass
            {
                public MyClass ()
                {
                }
            }
        }

    I encounter the following error message:

        /Library/Frameworks/Mono.framework/Versions/2.6.4/lib/mono/2.0/Microsoft.Common.targets: Error: You must specify DestinationFolder or DestinationFiles attribute. at Microsoft.Build.Tasks.Copy.Execute () [0x00000] in <filename unknown>:0

    Is anyone having the same problem? Some things I tried:
    - The error goes away if I change my project file format from "MSBuild (Visual Studio 2008)" to "MonoDevelop 1.0" (Preferences > Load/Save > Project file format to use when creating new projects).
    - There is no problem when building a console app project; only library projects are affected.
    - There is no problem on Ubuntu and SUSE.
    My environment: MonoDevelop 2.2.2, Mono 2.6.4, Mac OS X 10.6.3

    Read the article

  • Why is MonoDevelop compiling with csc.exe?

    - by korchev
    I am trying to use MonoDevelop (2.4 beta 1) on Windows (7 x64) to test a .NET application on Mono (2.6.4). For some reason MonoDevelop is not using the Mono tool chain to build the application; it compiles it with the Microsoft tool chain (C:\Windows\Microsoft.NET\Framework\v3.5\csc.exe). The project I am trying to build is a simple ASP.NET MVC application generated from the "New ASP.NET MVC application" template. The "Runtime Version" dropdown in Project > Options > Build > General shows "MONO/.NET 35". What gives? Is there a way to change the .NET tool chain?

    Read the article

  • What's the shebang in Facebook URLs for?

    - by BoltClock
    I've just noticed that the long, convoluted Facebook URLs that we're used to now look like this: http://www.facebook.com/example.profile#!/pages/Some-Other-Page/123456789012345 As far as I can recall, earlier this year it was just a normal URL-fragment-like string (starting with #), without the exclamation mark. But now it's a shebang (#!), which I've previously only seen in shell scripts and Perl scripts. Does #! now play some special role in URLs, like for a certain Ajax framework or something since Facebook's interface is now largely Ajaxified? Or is it for some other purpose?

    Read the article

  • emails not going out to all users

    - by user156814
    I have a few scripts that send out emails to my users, and for some reason not all users are getting them. The site is not live yet, so it's no big deal yet, but I don't understand why. I set up a few fake accounts: one with my school email, one with Hotmail, and one with Yahoo. When I sign up with my school email I receive the welcome email, but I get nothing at the other accounts. The same thing happens with my 'forgot password' email: only my school email works; Yahoo and Hotmail aren't working. I'm running on a Linux server with Apache, using PHP and the Kohana framework 2.3.4. Thanks.

    Read the article

  • Inclusion Handling in MVC 2 / MVCContrib

    - by mnemosyn
    I'd like to improve my page by combining and minifying JavaScript and CSS files. Since MVCContrib already contains a project called IncludeHandling, I took a look at it, which unfortunately left me with unanswered questions. There is quite a set of interfaces and objects involved in the process. I'm using Ninject.Mvc, but it seems that MvcContrib.IncludeHandling uses some additional (home-brewed?) DI. Can I work around this? Has anybody used this and can share some experience? Secondly, advice that is often heard is to put static content on a different domain so the request does not carry cookies and the like, making it much easier for the server to handle. But how can I combine that with automatic include handling; isn't that necessarily served from the same application? EDIT: I figured out that there is really just a single resolve call in the whole thing; I really wonder why they use DI for that. Thinking about a fork there...

    Read the article

  • Getting started with Windows Azure

    - by jonhobbs
    Hi all, I'm going to sound like a complete newbie here, but here goes... I've just signed up for a Windows Azure account and was hoping to get a simple hello-world .aspx page up and running in a browser to see how it all works, but I can't seem to find a simple guide to getting a very simple web application running. I've got as far as setting up a "service" and going to the "deploy" page, but it's asking me to upload an "application package". I've looked on MSDN but there aren't any simple guides, just reams of documentation talking about "roles" and "development fabric". For somebody who is proficient in HTML/CSS and knows a little about ASP.NET, it may just as well be in another language. So, does anybody know how to upload a simple .aspx page and then access it in a browser? Jon

    Read the article

  • MFC SDI Application without a default "New Document" on Startup

    - by Jd
    My application is an SDI with multiple views. By default, it creates a new document when the application starts. I want to modify this behavior so that a new document is created only when the user explicitly clicks New, or at least to mimic this behavior. Any ideas? I am using Visual Studio 2008 with the MFC Feature Pack. I googled and found a solution to this problem in an old MS Journal article, but unfortunately it doesn't seem to work now. Any workarounds or solutions? In short, I need to differentiate between the framework's call to OnFileNew() and the user clicking New.

    Read the article

  • ASP.Net security using Operations Based Security

    - by Josh
    All the security stuff I have worked with in ASP.NET in the past has mostly been role-based. This is easy enough to implement, and ASP.NET is geared toward this type of security model. However, I am looking for something a little more fine-grained than simple role-based security. Essentially I want to be able to write code like this:

        if (SecurityService.CanPerformOperation("SomeUpdateOperation")) {
            // perform some update logic here
        }

    I would also need row-level access like this:

        if (SecurityService.CanPerformOperation("SomeViewOperation", SomeEntityIdentifier)) {
            // Allow user to see specific data
        }

    Again, fine-grained access control. Is there anything like this already built? Some framework that I can drop into ASP.NET and start using, or am I going to have to build this myself?
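
    For illustration, a minimal sketch of the SecurityService the question imagines. The in-memory grant table mapping operations to roles is illustrative and would normally be loaded from a database or configuration; the row-level overload shows where a per-entity ACL lookup would slot in:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;

        public static class SecurityService
        {
            // operation -> roles allowed to perform it (illustrative data)
            private static readonly Dictionary<string, string[]> Grants =
                new Dictionary<string, string[]>
                {
                    { "SomeUpdateOperation", new[] { "Editor", "Admin" } },
                    { "SomeViewOperation",   new[] { "Viewer", "Editor", "Admin" } }
                };

            public static bool CanPerformOperation(string operation)
            {
                string[] roles;
                if (!Grants.TryGetValue(operation, out roles)) return false;
                var user = HttpContext.Current.User;
                return roles.Any(user.IsInRole);
            }

            public static bool CanPerformOperation(string operation, object entityId)
            {
                // Row-level check: combine the operation grant with a
                // per-entity ACL lookup (not shown) keyed by entityId.
                return CanPerformOperation(operation);
            }
        }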

    Read the article

  • Looking for evolutionary music example code

    - by Dan Dyer
    I would like to implement an interactive evolutionary algorithm for generating music (probably just simple melodies to start with). I'd like to use JFugue for this. Its website claims that it is well suited to evolutionary music, but I can't find any evolutionary examples. I already have a framework that provides the evolutionary machinery. What I am looking for is some simple, working code that demonstrates viable approaches for the musical part (e.g. suitable encodings and evolutionary operators for the evolved tunes). I have some ideas about how it might be achieved, but I'm not particularly knowledgeable about music theory, so to start with I'd like to just reimplement something that is known to work. So does anybody have, or know of, any freely available code (any language is fine) that demonstrates one or more approaches to evolutionary music? EDIT: I'm specifically looking for evolutionary code rather than other techniques that could be used for music synthesis.
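
    For the encoding/operator side (independent of JFugue), here is a minimal sketch: a melody is a fixed-length array of MIDI pitch numbers, with point mutation and one-point crossover as the operators; fitness is omitted because the algorithm is interactive (human-rated). All constants are illustrative, and mapping the pitches onto a JFugue Pattern string would be a separate step:

        using System;

        public class MelodyEvolution
        {
            const int Length = 16;                  // notes per melody
            const int LowNote = 60, HighNote = 72;  // one octave up from middle C
            static readonly Random Rng = new Random();

            public static int[] RandomMelody()
            {
                var genome = new int[Length];
                for (int i = 0; i < Length; i++)
                    genome[i] = Rng.Next(LowNote, HighNote + 1);
                return genome;
            }

            // Point mutation: nudge one note up or down a small interval.
            public static void Mutate(int[] genome)
            {
                int i = Rng.Next(Length);
                genome[i] = Math.Min(HighNote, Math.Max(LowNote,
                    genome[i] + Rng.Next(-2, 3)));
            }

            // One-point crossover: splice two parent melodies together.
            public static int[] Crossover(int[] a, int[] b)
            {
                int cut = Rng.Next(1, Length);
                var child = new int[Length];
                Array.Copy(a, child, cut);
                Array.Copy(b, cut, child, cut, Length - cut);
                return child;
            }
        }

    Richer encodings (durations, rests, chord degrees) follow the same pattern; the key design choice is keeping the genome small enough that a human can rate a population per generation.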

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions two kinds of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that). Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented for Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve; there never have been two independent service curves for either one as you currently find in BSD and Linux. Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below (not reproduced here: a root class whose bandwidth is split between children A and B, each of which has two leaf classes, A1/A2 and B1/B2). Here are some questions I cannot answer because the tutorials contradict each other:
    1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both have twice the priority of A2 and B2 automatically (still only getting 128 kbit/s on average).
    2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both have only 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (the current link-share requirement equals the specified link-share requirement minus the real-time bandwidth already provided to this class)?
    3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth plus link-share bandwidth. What is the truth?
    4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference if A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?
    5. If I use the separate real-time curve to increase priorities of classes, why would I need "curves" at all? Why is real-time not a flat value, and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shapes (not average bandwidth, but their slopes) to be different for real-time and link-share traffic?
    6. According to the little documentation available, real-time curve values are totally ignored for inner classes (class A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for "3.3 Sample configuration") set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)
    7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed; others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?
    8. One tutorial said to forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get; link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed (in case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says); and for all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?
    9. And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed), and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope or each a different one, and how do I find out the right slope?
    I still haven't lost hope that there exists at least a handful of people in this world who really understood HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)

    Read the article

  • Optimizing website - minification, sprites, etc...

    - by nivlam
    I'm looking at the product Aptimize Website Accelerator, which is an ISAPI filter that will concatenate files, minify CSS/JavaScript, and more. Does anyone have experience with this product, or any other "all-in-one" solutions? I'm interested in knowing whether something like this would be good long-term, or whether manually setting up all the components (integrating YUI Compressor into the build process, setting up gzip compression, tweaking expiration headers, etc.) would be more beneficial. An all-in-one solution like this looks very tempting, as it could save a lot of time if our website is "less than optimal". But how efficient are these products? Would setting up the components manually generate better results? Or is the gap between the all-in-one solution and a manual setup so small that it's negligible?

    Read the article
