How to set the image shown by the WSH Cover album-art panel (foobar2000 forum, Baidu Tieba — 50,059 followers)
I looked through the help file but couldn't find what I need. Here's the requirement: when each track plays, I want the album-art panel to show the picture embedded in the track itself, rather than giving priority to some *.jpg in the same folder as the track. How do I achieve that?
Maybe you didn't read it carefully enough?
I think this is the property to change. The help file says:
====================================
◆ Format — additional image source paths. This option accepts wildcards and title formatting, and can hold multiple paths; separate paths with "||". Inside a path you can use "%foobar_path%" to get the directory foobar2000 is installed in, e.g. "C:\Program files\foobar2000". The panel evaluates title formatting first and only then scans for "||" to split the string, so you can use scripts like $if(%artist%,||E:\MusicPic\%artist%).
====================================
What I'd like to know is what 【(%path%)\*.jpg】 means in the value 【$directory_path(%path%)\*.jpg】 — I gather it matches any jpg in the track's folder. And how do I write the address of an embedded image?
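The help text quoted above says the panel evaluates title formatting first and splits on "||" afterwards. A minimal Python sketch of that order of operations (the function name and the simplified behavior are my own illustration, not the panel's actual code):

```python
def split_cover_paths(formatted):
    """Split an already-title-formatted Format string into search paths.

    The panel evaluates title formatting first and only then splits on
    '||', so an $if() that expands to an empty string simply contributes
    no extra path. Empty segments are dropped.
    """
    return [p for p in formatted.split('||') if p]

# %artist% was non-empty, so $if(%artist%,||E:\MusicPic\%artist%)
# expanded to an extra '||'-prefixed path before the split:
print(split_cover_paths(r'D:\Music\Album\*.jpg||E:\MusicPic\Foo'))
# -> ['D:\\Music\\Album\\*.jpg', 'E:\\MusicPic\\Foo']

# %artist% was empty, so the $if() expanded to nothing:
print(split_cover_paths(r'D:\Music\Album\*.jpg'))
# -> ['D:\\Music\\Album\\*.jpg']
```

This is why the script in the help text is safe: when the artist field is missing, the separator never appears and the panel sees only the first path.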
1. Re question 1: that WSH panel is the album-art panel...? 2. Re question 2: I searched every panel for the string "embed". Only the code below looks at all relevant when opened, plus the code in the second screenshot; none of the remaining WSH panels contain the string at all... And however I look at these two places, there's no obvious order of priority between them...
The wsh cover script has clear comments at the top explaining this. If prepending "embeded" doesn't work, there is also an option to prefer embedded cover art under fb2k Preferences - Advanced - Display, in the album-art section.
Extracting the image of a specified region from a picture
I've recently been working on something like a QR-code scanner, which likewise needs to capture the image inside a particular region. Below is the most important part of the code.

The code that follows initializes the AV capture side, so the camera image can be displayed on the view. Let me briefly describe the problems I ran into along the way and how I solved them.

1. To punch a "hole" in the layer — the actual crop region — I used a CAShapeLayer with an even-odd fill rule, so it can act as a mask on the coverLayer that covers the previewLayer.

2. Getting the whole image is easy: it arrives as the sample buffer in the delegate callback. I used AVCaptureVideoDataOutput, which delivers a continuous stream of sampled frames.

3. Extracting the crop region from the full image. I spent a lot of time and effort on this and just could not get the cropped region right. I first tried CGImageCreateWithImageInRect, but the resulting image's position and size were wrong; then I switched to a CGContext approach, and it was still off. After much googling and head-scratching I realized that because the layer was initially presented with a fill gravity, the actual image size is not the same as the screen size. Once that was clear, the fix was to compute, for each videoGravity mode, the position and size the crop region actually occupies within the image — hence the calcRect method, which maps the "hole" cut out on screen to the corresponding rect in the image.

Finally got it working. Take a look if you're interested.
//
//  ScanView.m
//
//  Created by Tommy on 13-11-6.
//  Copyright (c) 2013 Tommy. All rights reserved.
//

#import "ScanView.h"
#import <AVFoundation/AVFoundation.h>
static inline double radians (double degrees) { return degrees * M_PI / 180; }

@interface ScanView() <AVCaptureVideoDataOutputSampleBufferDelegate>

@property AVCaptureVideoPreviewLayer* previewLayer;
@property AVCaptureSession* session;
@property AVCaptureDevice* videoDevice;
@property dispatch_queue_t camera_sample_queue;
@property CALayer* coverLayer;
@property CAShapeLayer* cropLayer;
@property CALayer* stillImageLayer;
@property AVCaptureStillImageOutput* stillImageOutput;
@property UIImageView* stillImageView;
@property UIImage* cropImage;
@property BOOL hasSetFocus;

@end
@implementation ScanView

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
        self.hasSetFocus = NO;
        [self initAVCaptuer];
        [self initOtherLayers];
    }
    return self;
}

// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect
{
    // Drawing code
}

- (void)layoutSubviews
{
    [super layoutSubviews];
    [self.previewLayer setFrame:self.bounds];
    [self.coverLayer setFrame:self.bounds];
    self.coverLayer.mask = self.cropLayer;
}
- (void) initAVCaptuer{
    self.cropRect = CGRectZero;

    self.videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput* input = [[AVCaptureDeviceInput alloc] initWithDevice:self.videoDevice error:nil];

    AVCaptureVideoDataOutput* output = [[AVCaptureVideoDataOutput alloc] init];
    output.alwaysDiscardsLateVideoFrames = YES;

    self.camera_sample_queue = dispatch_queue_create("com.scan.video.sample_queue", DISPATCH_QUEUE_SERIAL);
    [output setSampleBufferDelegate:self queue:self.camera_sample_queue];

    NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
    NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [output setVideoSettings:videoSettings];

    self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary* outputSettings = @{AVVideoCodecKey:AVVideoCodecJPEG};
    [self.stillImageOutput setOutputSettings:outputSettings];

    self.session = [[AVCaptureSession alloc] init];
    self.session.sessionPreset = AVCaptureSessionPresetMedium;

    if ([self.session canAddInput:input] && [self.session canAddOutput:output]) {
        [self.session addInput:input];
        [self.session addOutput:self.stillImageOutput];
        [self.session addOutput:output];

        self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
        self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;
        [self.layer addSublayer:self.previewLayer];
        // success
    } else {
        self.session = nil;
    }
}
- (void)setCropRect:(CGRect)cropRect
{
    _cropRect = cropRect;
    if (!CGRectEqualToRect(CGRectZero, self.cropRect)) {
        self.cropLayer = [[CAShapeLayer alloc] init];
        // Two nested rects with the even-odd rule punch a transparent
        // "hole" (the crop region) in the otherwise filled layer.
        CGMutablePathRef path = CGPathCreateMutable();
        CGPathAddRect(path, nil, self.cropRect);
        CGPathAddRect(path, nil, self.bounds);
        [self.cropLayer setFillRule:kCAFillRuleEvenOdd];
        [self.cropLayer setPath:path];
        [self.cropLayer setFillColor:[[UIColor whiteColor] CGColor]];
        [self.cropLayer setNeedsDisplay];
        CGPathRelease(path);
        //[self setVideoFocus];
    }
    [self.stillImageLayer setFrame:CGRectMake(100, 450, CGRectGetWidth(cropRect), CGRectGetHeight(cropRect))];
}
- (void) setVideoFocus{
    NSError* error = nil;
    CGPoint foucsPoint = CGPointMake(CGRectGetMidX(self.cropRect), CGRectGetMidY(self.cropRect));
    if ([self.videoDevice isFocusPointOfInterestSupported]
        && [self.videoDevice lockForConfiguration:&error] && !self.hasSetFocus) {
        self.hasSetFocus = YES;
        [self.videoDevice setFocusPointOfInterest:[self convertToPointOfInterestFromViewCoordinates:foucsPoint]];
        [self.videoDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
        //[self.videoDevice setFocusMode:AVCaptureFocusModeAutoFocus];
        [self.videoDevice unlockForConfiguration];
    }
    if (error) {
        NSLog(@"error:%@", error);
    }
}
- (CGPoint)convertToPointOfInterestFromViewCoordinates:(CGPoint)viewCoordinates
{
    CGPoint pointOfInterest = CGPointMake(.5f, .5f);
    CGSize frameSize = self.frame.size;
    AVCaptureVideoPreviewLayer *videoPreviewLayer = self.previewLayer;

    if ([self.previewLayer isMirrored]) {
        viewCoordinates.x = frameSize.width - viewCoordinates.x;
    }

    if ([[videoPreviewLayer videoGravity] isEqualToString:AVLayerVideoGravityResize]) {
        pointOfInterest = CGPointMake(viewCoordinates.y / frameSize.height, 1.f - (viewCoordinates.x / frameSize.width));
    } else {
        CGRect cleanAperture;
        for (AVCaptureInputPort *port in [[[[self session] inputs] lastObject] ports]) {
            if ([port mediaType] == AVMediaTypeVideo) {
                cleanAperture = CMVideoFormatDescriptionGetCleanAperture([port formatDescription], YES);
                CGSize apertureSize = cleanAperture.size;
                CGPoint point = viewCoordinates;

                CGFloat apertureRatio = apertureSize.height / apertureSize.width;
                CGFloat viewRatio = frameSize.width / frameSize.height;
                CGFloat xc = .5f;
                CGFloat yc = .5f;

                if ([[videoPreviewLayer videoGravity] isEqualToString:AVLayerVideoGravityResizeAspect]) {
                    if (viewRatio > apertureRatio) {
                        CGFloat y2 = frameSize.height;
                        CGFloat x2 = frameSize.height * apertureRatio;
                        CGFloat x1 = frameSize.width;
                        CGFloat blackBar = (x1 - x2) / 2;
                        if (point.x >= blackBar && point.x <= blackBar + x2) {
                            xc = point.y / y2;
                            yc = 1.f - ((point.x - blackBar) / x2);
                        }
                    } else {
                        CGFloat y2 = frameSize.width / apertureRatio;
                        CGFloat y1 = frameSize.height;
                        CGFloat x2 = frameSize.width;
                        CGFloat blackBar = (y1 - y2) / 2;
                        if (point.y >= blackBar && point.y <= blackBar + y2) {
                            xc = ((point.y - blackBar) / y2);
                            yc = 1.f - (point.x / x2);
                        }
                    }
                } else if ([[videoPreviewLayer videoGravity] isEqualToString:AVLayerVideoGravityResizeAspectFill]) {
                    if (viewRatio > apertureRatio) {
                        CGFloat y2 = apertureSize.width * (frameSize.width / apertureSize.height);
                        xc = (point.y + ((y2 - frameSize.height) / 2.f)) / y2;
                        yc = (frameSize.width - point.x) / frameSize.width;
                    } else {
                        CGFloat x2 = apertureSize.height * (frameSize.height / apertureSize.width);
                        yc = 1.f - ((point.x + ((x2 - frameSize.width) / 2)) / x2);
                        xc = point.y / frameSize.height;
                    }
                }

                pointOfInterest = CGPointMake(xc, yc);
                break;
            }
        }
    }

    return pointOfInterest;
}
- (void) initOtherLayers{
    self.coverLayer = [CALayer layer];
    self.coverLayer.backgroundColor = [[[UIColor blackColor] colorWithAlphaComponent:0.6] CGColor];
    [self.layer addSublayer:self.coverLayer];

    if (!CGRectEqualToRect(CGRectZero, self.cropRect)) {
        self.cropLayer = [[CAShapeLayer alloc] init];
        CGMutablePathRef path = CGPathCreateMutable();
        CGPathAddRect(path, nil, self.cropRect);
        CGPathAddRect(path, nil, self.bounds);
        [self.cropLayer setFillRule:kCAFillRuleEvenOdd];
        [self.cropLayer setPath:path];
        [self.cropLayer setFillColor:[[UIColor redColor] CGColor]];
        CGPathRelease(path);
    }

    self.stillImageLayer = [CALayer layer];
    self.stillImageLayer.backgroundColor = [[UIColor yellowColor] CGColor];
    self.stillImageLayer.contentsGravity = kCAGravityResizeAspect;
    [self.coverLayer addSublayer:self.stillImageLayer];

    self.stillImageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 300, 100, 100)];
    self.stillImageView.backgroundColor = [UIColor redColor];
    self.stillImageView.contentMode = UIViewContentModeScaleAspectFit;
    [self addSubview:self.stillImageView];

    self.previewLayer.contentsGravity = kCAGravityResizeAspect;
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
    [self setVideoFocus];

    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    self.cropImage = [self cropImageInRect:image];
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.stillImageView setImage:image];
        // [self.stillImageLayer setContents:(id)[self.cropImage CGImage]];
    });
}
// Create a UIImage from sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a Core Video image buffer for the media data in the CMSampleBuffer
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the number of bytes per row of the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    //NSLog(@"%zu,%zu", width, height);
    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    // Create an image object from the Quartz image
    //UIImage *image = [UIImage imageWithCGImage:quartzImage];
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0f orientation:UIImageOrientationRight];
    // Release the Quartz image
    CGImageRelease(quartzImage);
    return (image);
}
- (CGRect) calcRect:(CGSize)imageSize{
    NSString* gravity = self.previewLayer.videoGravity;
    CGRect cropRect = self.cropRect;
    CGSize screenSize = self.previewLayer.bounds.size;

    CGFloat screenRatio = screenSize.height / screenSize.width;
    CGFloat imageRatio = imageSize.height / imageSize.width;

    CGRect presentImageRect = self.previewLayer.bounds;
    CGFloat scale = 1.0;

    if ([AVLayerVideoGravityResizeAspect isEqual:gravity]) {
        CGFloat presentImageWidth = imageSize.width;
        CGFloat presentImageHeigth = imageSize.height;
        if (screenRatio > imageRatio) {
            // Screen is taller than the image: width is the limiting dimension.
            presentImageWidth = screenSize.width;
            presentImageHeigth = presentImageWidth * imageRatio;
        } else {
            presentImageHeigth = screenSize.height;
            presentImageWidth = presentImageHeigth / imageRatio;
        }
        presentImageRect.size = CGSizeMake(presentImageWidth, presentImageHeigth);
        presentImageRect.origin = CGPointMake((screenSize.width - presentImageWidth) / 2.0,
                                              (screenSize.height - presentImageHeigth) / 2.0);
    } else if ([AVLayerVideoGravityResizeAspectFill isEqual:gravity]) {
        CGFloat presentImageWidth = imageSize.width;
        CGFloat presentImageHeigth = imageSize.height;
        if (screenRatio > imageRatio) {
            // Fill mode matches the other dimension, so the image overflows the screen.
            presentImageHeigth = screenSize.height;
            presentImageWidth = presentImageHeigth / imageRatio;
        } else {
            presentImageWidth = screenSize.width;
            presentImageHeigth = presentImageWidth * imageRatio;
        }
        presentImageRect.size = CGSizeMake(presentImageWidth, presentImageHeigth);
        presentImageRect.origin = CGPointMake((screenSize.width - presentImageWidth) / 2.0,
                                              (screenSize.height - presentImageHeigth) / 2.0);
    } else {
        NSAssert(0, @"dont support:%@", gravity);
    }

    scale = CGRectGetWidth(presentImageRect) / imageSize.width;
    CGRect rect = cropRect;
    rect.origin = CGPointMake(CGRectGetMinX(cropRect) - CGRectGetMinX(presentImageRect),
                              CGRectGetMinY(cropRect) - CGRectGetMinY(presentImageRect));
    rect.origin.x /= scale;
    rect.origin.y /= scale;
    rect.size.width /= scale;
    rect.size.height /= scale;
    return rect;
}
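The branch logic in calcRect reduces to a small piece of aspect-ratio arithmetic: work out which dimension the gravity mode pins to the screen, derive the other from the image's aspect ratio, and center the result. A standalone Python sketch of that computation (the function name and the sizes in the example are my own, purely for illustration):

```python
def present_image_rect(screen_w, screen_h, img_w, img_h, fill=False):
    """Where the camera image actually lands on the preview layer.

    fill=False mimics AVLayerVideoGravityResizeAspect (letterboxed),
    fill=True mimics AVLayerVideoGravityResizeAspectFill (cropped).
    Returns (x, y, w, h) in screen coordinates; the origin can go
    negative in fill mode because the image overflows the screen.
    """
    img_ratio = img_h / img_w
    screen_taller = (screen_h / screen_w) > img_ratio
    # Aspect-fit matches the limiting dimension; aspect-fill matches the other.
    if screen_taller != fill:
        w = screen_w
        h = w * img_ratio
    else:
        h = screen_h
        w = h / img_ratio
    return ((screen_w - w) / 2.0, (screen_h - h) / 2.0, w, h)

# A 1080x1920 portrait frame aspect-fitted onto a 320x480 screen is
# pillarboxed: ~270 points wide, centred with ~25-point bars each side.
x, y, w, h = present_image_rect(320, 480, 1080, 1920)
print(round(x), round(y), round(w), round(h))  # 25 0 270 480
```

Dividing the on-screen crop rect's offset and size by `w / img_w` (the `scale` in calcRect) then converts screen points back into image pixels.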
#define SUBSET_SIZE 360

- (UIImage*) cropImageInRect:(UIImage*)image{
    CGSize size = [image size];
    CGRect cropRect = [self calcRect:size];

    float scale = fminf(1.0f, fmaxf(SUBSET_SIZE / cropRect.size.width, SUBSET_SIZE / cropRect.size.height));
    CGPoint offset = CGPointMake(-cropRect.origin.x, -cropRect.origin.y);

    size_t subsetWidth = cropRect.size.width * scale;
    size_t subsetHeight = cropRect.size.height * scale;

    CGColorSpaceRef grayColorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx =
        CGBitmapContextCreate(nil,
                              subsetWidth,
                              subsetHeight,
                              8,
                              0,
                              grayColorSpace,
                              kCGImageAlphaNone | kCGBitmapByteOrderDefault);
    CGColorSpaceRelease(grayColorSpace);
    CGContextSetInterpolationQuality(ctx, kCGInterpolationNone);
    CGContextSetAllowsAntialiasing(ctx, false);

    // adjust the coordinate system
    CGContextTranslateCTM(ctx, 0.0, subsetHeight);
    CGContextScaleCTM(ctx, 1.0, -1.0);

    UIGraphicsPushContext(ctx);
    CGRect rect = CGRectMake(offset.x * scale, offset.y * scale, scale * size.width, scale * size.height);
    [image drawInRect:rect];
    UIGraphicsPopContext();

    CGContextFlush(ctx);

    CGImageRef subsetImageRef = CGBitmapContextCreateImage(ctx);
    UIImage* subsetImage = [UIImage imageWithCGImage:subsetImageRef];
    CGImageRelease(subsetImageRef);
    CGContextRelease(ctx);

    return subsetImage;
}
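The scale expression at the top of cropImageInRect is worth a note: it shrinks large crops for decoding but never enlarges small ones. A quick Python sketch of just that expression (illustrative only, mirroring the fminf/fmaxf line):

```python
SUBSET_SIZE = 360

def subset_scale(crop_w, crop_h, subset=SUBSET_SIZE):
    """Mirror of the scale in cropImageInRect: shrink the crop so its
    smaller side lands at `subset` pixels, but never upscale (cap at 1.0)."""
    return min(1.0, max(subset / crop_w, subset / crop_h))

print(subset_scale(720, 1080))  # 0.5 -> crop rendered at 360x540
print(subset_scale(200, 300))   # 1.0 -> already small enough, left alone
```

Because `max()` picks the ratio of the smaller dimension, the decoder always sees at least SUBSET_SIZE pixels along the crop's shorter side when the source is big enough to provide them.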
- (void) start{
    dispatch_sync(self.camera_sample_queue, ^{
        [self.session startRunning];
    });
}

- (void) stop{
    if (self.session) {
        [self.session stopRunning];
    }
}

@end
VirSCAN same-filename scan history for "Q3DCover.exe": of 5 files scanned under this name, 1 was safe and 4 were not, so a file with this name is roughly 80% likely to be a virus. All five samples were identified as "PE32 executable for MS Windows (GUI) Intel 80386 32-bit".
HA-Q3DCover — Quick 3D Cover (DesignTemplates, Extras, Tutorials)

The package is distributed as HA-Q3DCover.doc or HA-Q3DCover.rar; only part of the directory tree is listed below. If no subdirectories are shown, HA-Q3DCover is a single-file program or archive.

Directory: HA-Q3DCover — 0 files, 1 directory.
Directory: www.myeducs.cn — 0 files, 1 directory.
Directory: Quick 3D Cover — 6 files, 3 directories.
Directory: DesignTemplates — 15 files, 0 directories.
..\100.A1.q2t
..\100.A2.q2t
..\120.A1.q2t
..\120.A2.q2t
..\120.A3.q2t
..\120.A4.q2t
..\120.A5.q2t
..\140.A1.q2t
..\140.A2.q2t
..\200.A1a.q2t
..\200.A1b.q2t
..\900.A1.q2t
..\900.A2.q2t
..\900.B1.q2t
..\900.B2.q2t
Directory: Extras — 6 files, 0 directories.
..\CD-ROM.png
..\DiscTemplate.gif
..\q3c-disc.jpg
..\q3c-front.gif
..\q3c-side.gif
..\q3c-top.gif
Directory: Tutorials — 1 file, 0 directories.
..\red_flower.jpg
..\q3c.lng
..\q3clic.txt
..\Q3DCover.chm
..\Q3DCover.exe
..\readme.txt
..\注册方法.txt